How can I create Docker certificates when running on EC2 instances? - amazon-web-services

I'm using Terraform to spin up an EC2 instance. Then, I run an Ansible playbook to configure the instance and run docker-compose.
I noticed that Ansible's docker_service requires key_path and cert_path to be set when running against a remote Docker daemon. I'd like to use a remote connection, as the Docker images required have already been built locally as part of my CI/CD process.
How can I generate the cert and key for an EC2 instance?
I found this question: How do I deploy my docker-compose project using Terraform?, but it uses remote-exec from Terraform and assumes the project source will exist on the remote machine.
There are a few questions which provide reasonable workflows, but they all involve using docker-machine, which is not quite what I want.

I believe I found it:
https://docs.docker.com/engine/security/https/
This document describes how to create the CA, the server key and certificate, and the client key and certificate.
And the following document describes how to create a certificate using Ansible:
https://docs.ansible.com/ansible/2.4/openssl_certificate_module.html
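For reference, a rough sketch of how those Ansible modules might be wired together on the EC2 host; the file paths, the selfsigned provider, and the common name are illustrative assumptions, and the Docker guide's full flow additionally uses a CA to sign both the server and client certificates:

# tasks/docker-tls.yml -- sketch only; paths and names are placeholders
- name: Generate a private key for the Docker daemon
  openssl_privatekey:
    path: /etc/docker/ssl/server-key.pem

- name: Generate a certificate signing request for the instance
  openssl_csr:
    path: /etc/docker/ssl/server.csr
    privatekey_path: /etc/docker/ssl/server-key.pem
    common_name: "{{ ansible_host }}"

- name: Create the server certificate (self-signed here; the Docker guide signs with a CA)
  openssl_certificate:
    path: /etc/docker/ssl/server-cert.pem
    csr_path: /etc/docker/ssl/server.csr
    privatekey_path: /etc/docker/ssl/server-key.pem
    provider: selfsigned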

Related

How to connect a lambda to a database accessible locally on Mac's localhost when using sam

Background
I have a lambda that is connected to an RDS database. The RDS database and lambda are in a VPC. The DB is only accessible to developers via a bastion instance.
During development we can test the lambda using sam. This works fine for APIs that don't depend on the database.
For APIs that depend on the database, I would ideally like to connect to the database instance running in our Gamma stage. However, we can't directly connect to it because it is in a VPC.
What I have tried
To get around this, we can use the SSM agent on the bastion instance with port forwarding so that the database is accessible on our Mac's localhost. See instructions. Sample code below:
aws ssm start-session --target <instance-id> --document-name AWS-StartPortForwardingSessionToRemoteHost --parameters host="mydb.example.us-east-2.rds.amazonaws.com",portNumber="3306",localPortNumber="3306"
I can now connect to this locally at http://127.0.0.1:3306/ via CLI or a GUI like PSequel. No need to use SSH.
However, if I try to get the lambda to connect to http://127.0.0.1:3306/, I get the error Connection refused.
My understanding is that this is because 127.0.0.1 resolves to the docker container's localhost rather than my machine's localhost.
According to the Docker docs, host.docker.internal "... resolves to the internal IP address used by the host".
However, if I try to get the lambda to connect to http://host.docker.internal:3306/, I get the error Name or service not known.
Minimal Working Example
I have created a MWE at https://github.com/bluprince13/sam-app-connect-to-host-localhost. Instead of trying to connect to a database, we can just run a Python server locally, and try to get the lambda to connect to it.
Question
How to connect a lambda to a database accessible locally on Mac's localhost when using sam?
I'm open to any alternatives for testing our lambda locally. Deploying to AWS is too much of a pain even with cdk hotswap.
References
[Question] Is it possible to connect to a database running on localhost? #2272
Cannot call API running on separate localhost port #260
How to connect RDS instance when running SAM locally?
The MWE provided looks correctly configured. The issue is with the Docker configuration. As the OP figured out, there was a DNS override in the Docker Engine settings (Docker -> Preferences -> Docker Engine). After removing it, everything worked fine with host.docker.internal.
In general, to reach your host machine's localhost from a container, you have to use host.docker.internal on Mac and Windows. Refer to the SO post for configurations on other platforms. Specific to SAM on macOS, it is advisable to do the following to avoid hard-coding parameters:
Create an environment variable in template.yaml under your resource's Properties:
Properties:
  Environment:
    Variables:
      DB_HOST: *your_database_url*
Create an env.json file with the following configuration.
{
  "*Logical_ID of your resource*": {
    "DB_HOST": "host.docker.internal"
  }
}
Run SAM with the env.json file: sam local invoke --env-vars env.json.
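For completeness, a minimal sketch of how the function code might pick the value up; the handler name and return shape are illustrative, and only the DB_HOST variable comes from the configuration above:

# app.py -- sketch; real database connection code is omitted
import json
import os

def lambda_handler(event, context):
    db_host = os.environ.get("DB_HOST", "127.0.0.1")
    # connect to the database at db_host:3306 here
    return {"statusCode": 200, "body": json.dumps({"db_host": db_host})}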

how to access self managed docker registry hosted on AWS EC2 from windows machine?

I want to set up a self-managed private Docker registry on an EC2 instance without using the AWS ECR/ECS services, i.e. using the registry:2 container image, and make it accessible to the development team so that they can push/pull Docker images remotely.
The development team has Windows laptops with Docker for Windows installed.
Please note:
The EC2 instance is hosted in a private subnet.
I have already created an AWS ALB with an OpenSSL self-signed certificate and attached it to the EC2 instance so that the server can be accessed over an HTTPS listener.
I have deployed the Docker registry using the command below:
docker run -d -p 8080:5000 --restart=always --name registry registry:2
I think the routing of 443 to 8080 is working, because when I open https://<domain-name>/v2/_catalog in the browser I get output in JSON format.
Currently, the catalog is empty because no image has been pushed to the registry yet.
I expect this Docker registry hosted on the EC2 instance to be accessible remotely, i.e. from the Windows machines as well.
Any references/suggestions/steps to achieve my task would be really helpful.
I have resolved the issue by following the below steps:
Added the --insecure-registry parameter in the docker.service file
Created a new directory "certs.d/my-domain-name" at /etc/docker
(Please note: the domain name here is the one at which the docker registry is to be accessed)
Placed the self-signed OpenSSL certificate and key for the domain name inside the above-mentioned directory
Restarted Docker
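A rough sketch of what this looks like in practice; the domain registry.example.com, the image name, and the file paths are placeholders, and on Docker for Windows the equivalent is either trusting the certificate on the machine or adding the registry to insecure-registries in the daemon settings:

# On a Linux Docker host: trust the registry's self-signed certificate
sudo mkdir -p /etc/docker/certs.d/registry.example.com
sudo cp ca.crt /etc/docker/certs.d/registry.example.com/ca.crt
sudo systemctl restart docker

# From a developer machine: push and pull against the registry
docker tag myapp:latest registry.example.com/myapp:latest
docker push registry.example.com/myapp:latest
docker pull registry.example.com/myapp:latest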

How to connect to specific instance behind Elastic Load Balancer

I'm deploying my app via Elastic Beanstalk, which creates an Elastic Load Balancer and puts all my instances behind it (3 or more).
Is there a way to contact each of these instances directly? I want to trigger a specific command on each instance (a git pull to synchronize with the latest code in my remote repo).
I have the list of IP addresses and public DNS names of the instances from the PHP SDK, but since the firewall rules restrict the source IP on port 80 to the Elastic Load Balancer, I can't seem to access them directly.
Is there a way around it?
P.S. The SSH port seems to be open for all traffic, but how can I create a trigger with that? I'm hoping to create a PHP script to automate this with a webhook on the remote repo.
I highly suggest you use the EB CLI with git integration for all deployments, no matter how small. It is great because you can map a git branch to an environment with eb use YOUR_ENV; then, when you run eb deploy with that branch checked out, it deploys to that environment.
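A minimal sketch of that flow (the environment name is a placeholder):

eb init                 # one-time: link the repository to an EB application
eb use my-prod-env      # map the currently checked-out branch to an environment
eb deploy               # deploy the checked-out commit to that environment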
There is a lot of work involved in ensuring multiple servers pull the correct code and that everything works as expected. What if a server is in the process of spinning up but not yet ready for SSH, so your script skips it and it never gets the new code?
Also, what happens when a new server spins up but runs the old application because that's what is stored in EB? You could have your kickstart script do a git pull, but then what happens when you are not ready to push and a new server starts up alone with the new code?
I could probably find 5 more edge cases without breaking a sweat. Look into eb deploy, you will be happy you did.
You need to set up a CI (or make a simple web service) and create a webhook in your repository. Your CI needs to get all the instances under your Elastic Beanstalk environment and then call git pull on each via SSH.
Or, just create a cron job on all your instances via an .ebextensions script.
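A rough sketch of such an .ebextensions file; the file name, schedule, branch, and paths are placeholders, and as the other answers here note, pulling code outside of eb deploy is generally discouraged:

# .ebextensions/cron-git-pull.config
files:
  "/etc/cron.d/app_sync":
    mode: "000644"
    owner: root
    group: root
    content: |
      */5 * * * * root cd /var/app/current && git pull origin main >> /var/log/app_sync.log 2>&1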
I think it's not good practice in Elastic Beanstalk to run git pull in order to synchronize your app with your git repo, because it misuses the semantics of the Application Version. Sometimes you can't tell from the Application Version which app version is actually on your instances. It's better to create a new Application Version in Elastic Beanstalk to deploy a new app version.
If you host your repo on GitHub, you can take a look at CodeDeploy.

running a docker loop device on aws

I'm new to AWS and am having some issues getting my mobile app running again. Forgive me if this question seems vague.
For a school project we created a mobile app on AWS and deployed it using Docker containers (another student managed these tasks). While trying to get my own key pair to SSH into my EC2 instance, I detached the volume associated with the instance and reattached it after getting my own key pair. Now I can SSH into my instance, but my front end can't talk to my web server.
So my question is: do I create a new application on Elastic Beanstalk to deploy my app, even though when I run lsblk it shows I have a Docker loop device, and when I run docker images I see several that match the name of my application? Or do I somehow get the container running again? docker run doesn't seem to be working.
No need, just upload a new update into Elastic Beanstalk. AWS will handle the rest.
FYI, the Elastic Beanstalk single Docker container update process is simple under the hood:
You upload the update into AWS.
AWS will put it in your S3 bucket.
Inside your EC2, there is an Elastic Beanstalk agent. It will check for a new update.
If there is an update, the agent will download the update file and extract it.
The agent will build a new Docker image.
If the build succeeds, it will generate a new config pointing Nginx (the web proxy) at the new web server container.
Nginx will be reloaded.
Your old Docker container will be destroyed.
Don't change anything inside the EC2 instances of Elastic Beanstalk unless you know what you're doing. Elastic Beanstalk is designed to automate deployment and scaling, so if you change something on your EC2 instance manually, it might be lost. Of course, you can modify your EC2 instance, but you need to automate the change using .ebextensions or take an image.
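For reference, the update bundle for a single-container environment is typically just a Dockerfile or a Dockerrun.aws.json that the agent builds/runs; a minimal sketch of the latter, assuming a prebuilt image (the image name and port are placeholders):

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myorg/myapp:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "8080" }
  ]
}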

vagrant, puppet, aws but without vagrant on aws

So I have been googling for a while now, and either I have completed the internet or I can't articulate my search query well enough to find the answer, so I thought I would come here.
So my team and I want to use Vagrant on our local machines, which is fine. We want to use Puppet for our configs. Now, we don't want Vagrant inside our AWS/DigitalOcean/whatever provider's instances. How do I get the Puppet config to automatically build the instance for us?
I am a little stuck. I think I need a Puppet master, but how does the AWS instance, for example, get built based on the Puppet config, and how does Vagrant use the same config?
Thanks
That's the default behavior if you install Vagrant on your local workstation and configure an instance for AWS. Vagrant will connect to the instance over SSH and install the client software (in this case Puppet) to configure it.
In short: Vagrant will not install itself on any AWS instance.
Here's a link to the Vagrant-AWS Plugin:
Vagrant-AWS
Further information:
Vagrant uses providers to create VMs. The normal workflow is to use, for example, the VirtualBox provider (which is built into Vagrant) to create local VMs. You can set attributes for the specific provider in the Vagrantfile. In this case you need the Vagrant AWS provider (which is a plugin, installed with the vagrant plugin install <pluginname> command), so you can create VMs remotely. Just as with the VirtualBox provider, Vagrant will not install itself on the created VM (whether it is remote or not doesn't matter).
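A minimal Vagrantfile sketch using the AWS provider; the AMI, key pair, SSH settings, and Puppet paths are placeholder assumptions, and the same Puppet provisioner block works unchanged with the local VirtualBox provider:

# Vagrantfile (sketch; requires `vagrant plugin install vagrant-aws`)
Vagrant.configure("2") do |config|
  config.vm.box = "dummy"  # vagrant-aws uses a placeholder box

  config.vm.provider :aws do |aws, override|
    aws.access_key_id     = ENV["AWS_ACCESS_KEY_ID"]
    aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
    aws.keypair_name      = "my-key"
    aws.ami               = "ami-xxxxxxxx"
    aws.instance_type     = "t2.micro"
    override.ssh.username         = "ubuntu"
    override.ssh.private_key_path = "~/.ssh/my-key.pem"
  end

  # The same Puppet manifests are applied locally and on AWS
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file  = "site.pp"
  end
end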
Vagrant uses masterless provisioning (puppet apply): the script runs inside your Vagrant box.
To provision machines in the cloud you need a Puppet master server and Puppet clients.
To bootstrap clients automatically, you can add a shell script to your server's user-data: DigitalOcean, AWS EC2.
This script is responsible for installing Puppet and connecting to the master server.
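A rough sketch of such a user-data script, assuming an Ubuntu image; the package name and the master hostname puppet.example.com are placeholders:

#!/bin/bash
# Install the Puppet agent and point it at the master
apt-get update -y
apt-get install -y puppet
puppet config set server puppet.example.com --section agent
puppet agent --enable
# First run: request a certificate from the master and wait for it to be signed
puppet agent --test --waitforcert 60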