Upload local Vagrant package.box to AWS

So, I've been working locally in a Vagrant Ubuntu box for the past month: I've spent a lot of time customizing it and installing exactly the software I want on it. I started all of this through the normal Vagrant tutorial (i.e., nothing special). I packaged my local Vagrant box into a package.box file. Now I want to move my development environment (i.e., the package.box file) to an Amazon EC2 instance on AWS. I know I'm not supposed to ask for software recommendations, but my question is basically: is this possible, and if it is, could you point me to some examples of people doing it? I've read that Packer might be an option, but it looks to me (from a very inexperienced perspective) like maybe I should have started with that instead of trying to use it now. Any help would be appreciated - I don't want to spend a couple of weeks setting up a new environment when I already have one set up locally.
Edit:
Progress! I followed @error2007s's link and worked through the tutorial. I'm at the point where I've uploaded the VMDK image to S3 and provisioned an instance from it (all done automatically with the ec2-import-instance command on the CLI). However, I don't see a Public IP to access the new instance after I start it up.
I think this is related to cloud-init somehow, but I'm not really sure what that is. I tried it both with the /etc/cloud/cloud.cfg file that came with the box and with the one listed here, and neither of the two boxes I uploaded gave me a Public IP to access.
Edit 2:
Here are some things I see in the Console (They all seem right to me, but a more experienced eye might see something wrong):
Subnet info:
  Auto-assign Public IP: yes
  Network ACL:
VPC info:
  DNS resolution: yes
  DNS hostnames: yes
  ClassicLink DNS Support: no
  VPC CIDR: 172.31.0.0/16
DHCP Option Set:
  Options: domain-name = ec2.internal; domain-name-servers = AmazonProvidedDNS
From my perspective, those all look right, or am I missing something?
I assigned an Elastic IP per these instructions, but when I ssh ec2-user@<elastic-ip>, it says ssh: connect to host <elastic-ip> port 22: Connection refused. The security group assigned to the instance is set to allow all protocols on all ports. Also, this is the first time I've encountered an Elastic IP, and I'm not sure exactly what it does.
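In case it is useful, here is roughly how those same details can be double-checked with boto3 instead of clicking through the console (a minimal sketch; the region and instance ID are placeholders for the imported instance):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # Placeholder ID for the instance created by the import.
    resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
    instance = resp["Reservations"][0]["Instances"][0]

    # Did the instance actually get a public address?
    print("Public IP:", instance.get("PublicIpAddress", "none assigned"))

    # Does its subnet auto-assign public IPs on launch?
    subnet = ec2.describe_subnets(SubnetIds=[instance["SubnetId"]])["Subnets"][0]
    print("MapPublicIpOnLaunch:", subnet["MapPublicIpOnLaunch"])

    # Do the attached security groups actually open port 22?
    group_ids = [sg["GroupId"] for sg in instance["SecurityGroups"]]
    for sg in ec2.describe_security_groups(GroupIds=group_ids)["SecurityGroups"]:
        print(sg["GroupId"], sg["IpPermissions"])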

Amazon enables you to transfer your VM to AWS as an EC2 instance. Check this tutorial; it is simpler:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingVirtualMachinesinAmazonEC2.html
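If you would rather script the import than use the old CLI tool, a rough boto3 sketch of the newer ImportImage flow looks something like this (the bucket, key, and region are placeholders, and it assumes the vmimport service role described in the tutorial is already in place):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # The bucket and key are placeholders; upload the exported VMDK there first.
    task = ec2.import_image(
        Description="vagrant-dev-box",
        DiskContainers=[{
            "Description": "vagrant-dev-box",
            "Format": "vmdk",
            "UserBucket": {
                "S3Bucket": "my-import-bucket",
                "S3Key": "package/box-disk1.vmdk",
            },
        }],
    )

    # Poll the import task; once it completes you get an AMI to launch from.
    print(ec2.describe_import_image_tasks(ImportTaskIds=[task["ImportTaskId"]]))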

You want to use the Vagrant AWS provider found here:
https://github.com/mitchellh/vagrant-aws
This is a Vagrant 1.2+ plugin that adds an AWS provider to Vagrant,
allowing Vagrant to control and provision machines in EC2 and VPC.
This lets you provision your AWS instances using Vagrant, so you can migrate the same local development environment to an AWS EC2 instance.
There is a good tutorial here:
https://nurmrony.wordpress.com/2015/03/15/vagrant-deploy-and-provisioning-an-amazon-ec2-instance/

Hi, I have found these articles, but I have not yet tested them myself. I'm still in the middle of organizing my personal notes and identifying my technology stack. I intend to have a Homestead Vagrant box replicated as an EC2 instance so I won't have to configure the instance(s) manually.
https://nurmrony.wordpress.com/2015/03/15/vagrant-deploy-and-provisioning-an-amazon-ec2-instance/
https://www.tothenew.com/blog/using-vagrant-to-deploy-aws-ec2-instances/
https://foxutech.com/how-to-deploy-on-amazon-ec2-with-vagrant/
https://blog.scottlowe.org/2016/09/15/using-vagrant-with-aws/
https://devops.com/devops-primer-using-vagrant-with-aws/
I find their approaches similar. The only thing I am worried about is the "vagrant box add" part.
I asked myself: what if I had to do this setup again for familiarization purposes? What would happen, since I have already added a Vagrant box (the dummy one, as instructed in the tutorials) previously?

Related

AWS ECS Task can't connect to RDS Database

I'm a newer AWS user, and today I got stuck while working on a sample project. I successfully created a Docker container that runs a simple R script that connects to my AWS RDS MySQL database and creates and writes some basic files to it. I built a public ECR repository, pushed my Docker image there, and built an ECS cluster and task, choosing Fargate and using the container image from my repository. My task ran, and I could see the R code being executed when I went through the logs, but it was never able to connect to the SQL database, and it exited afterwards.
I've had to whitelist my own IP address in the security group for the RDS database so that I can connect to it, so I'm aware I probably have to do that for my ECS task to establish that connection too. But won't that IP address constantly change, because I won't have a static IP for the Fargate server that is executing my task? I'm trying to stay on the free tier, so I'm not sure I want to set up an Elastic IP address for this server.
These 2 articles seem close if not the same issue I'm having but I can't figure out a solution. I haven't found any other info.
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-task-database-connection/
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-static-elastic-ip-address/
The end goal is to get this sample project running successfully on a fixed schedule, and then to run actual scripts there to help automate things and make my life easier, so this sample project is a first step towards that. Any help or info on the questions I'm having would be appreciated!
Yes, your task is ephemeral (whether you launch it manually or as part of an ECS service), and its private/public IP address may change over time if it gets replaced. The way to make the connectivity rules stick is to assign a security group to the task (inbound access on whatever specific port you need, I assume, and outbound to everything) and assign another security group to the RDS database that allows inbound access on port 3306 from the security group you assigned to the task. This is the trick: the SG will not change, and you are telling RDS to allow ALL traffic coming from that SG. I see the first article you posted doesn't talk about this part (it should).
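A rough boto3 sketch of that security-group-to-security-group rule (both group IDs and the region are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    TASK_SG = "sg-0aaa1111bbbb22222"  # SG attached to the Fargate task (placeholder)
    RDS_SG = "sg-0ccc3333dddd44444"   # SG attached to the RDS instance (placeholder)

    # Allow MySQL (3306) into RDS from anything carrying the task's SG,
    # no matter which IP address the task happens to get.
    ec2.authorize_security_group_ingress(
        GroupId=RDS_SG,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{
                "GroupId": TASK_SG,
                "Description": "MySQL from ECS task",
            }],
        }],
    )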

Authentication failure between 2 EC2 instances with Windows Server 2016

I am a newbie to AWS Cloud. Recently I was given the requirement to do an Automation Anywhere Clustered Control Room installation on AWS Cloud. Based on this requirement, I set up 2 EC2 instances (as a test run) with the Windows Server 2016 AMI. I installed MS SQL Server on one of the instances and opened port 1433 for access from the other instance. I installed Control Room on the first instance successfully (using custom install). When I completed the installation on the second instance, I got a Credential Vault error. I have created a shared folder that is accessible from both instances, but in spite of that I am still getting the error. I have security groups and firewalls set up appropriately as well. I have shared the snapshot below. I have been informed that there is an authentication issue between the 2 instances. How do I get this to work?
Any and all help is much appreciated.
I don't know if this is a duplicate of any other question. If it is, please point me in the right direction.
I was able to solve the problem. I reinstalled the Control Room on both EC2 machines with Manual mode for the Credential Vault access.
I also reset the firewall to allow only ports 80 and 443 (for now), both locally and remotely, on the second EC2 instance.

SSL Install on AWS

I've been tasked with getting a new SSL certificate installed on a website; the site is hosted on AWS EC2.
I've discovered that I need the key pair in order to connect to the server instance, however the client doesn't have contact with the former web master.
I don't have much familiarity with AWS, so I'm somewhat at a loss as to how to proceed. I'm guessing I would need the old key pair to access the server instance and install the SSL certificate?
I see there's also the Certificate Manager section in AWS, but I don't currently see an SSL certificate in there. Will installing it here attach it to the website, or do I need to access the server instance and install it there?
There is a documented process for updating the SSH keys on an EC2 instance. However, this will require some downtime, and it must not be run on an instance-store-backed instance. If you're new to AWS, you might not be able to determine whether that is the case, so it would be risky.
Instead, I think your best option is to bring up an Elastic Load Balancer to be the new front-end for the application: clients will connect to it, and it will in turn connect to the application instance. You can attach an ACM cert to the ELB, and shifting traffic should be a matter of changing the DNS entry (but, of course, test it out first!).
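To make that concrete, here is a very rough boto3 sketch of requesting an ACM certificate and attaching it to an HTTPS listener on an Application Load Balancer (the domain and both ARNs are placeholders, and the certificate has to pass DNS validation before the listener can use it):

    import boto3

    acm = boto3.client("acm", region_name="us-east-1")     # placeholder region
    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    # Request a certificate for the site (placeholder domain); it must be
    # validated via DNS before the ARN is usable.
    cert = acm.request_certificate(
        DomainName="www.example.com",
        ValidationMethod="DNS",
    )

    # Placeholder ARNs for an existing load balancer and a target group that
    # forwards to the current application instance.
    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/example/1234567890abcdef",
        Protocol="HTTPS",
        Port=443,
        Certificates=[{"CertificateArn": cert["CertificateArn"]}],
        DefaultActions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example/abcdef1234567890",
        }],
    )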
Moving forward, you should redeploy the application to a new EC2 instance, and then point the ELB at this instance. This may be easier said than done, because the old instance is probably manually configured. With luck you have the site in source control, and can do deploys in a test environment.
If not, and you're running on Linux, you'll need to make a snapshot of the live instance and attach it to a different instance to learn how it's configured. Start with the EC2 EBS docs and try it out in a test environment before touching production.
I'm not sure if there's any good way to recover the content from a Windows EC2 instance. And if you're not comfortable with doing ops, you should find someone who is.

Zappa + RDS Connection Issues

I'm hoping someone could help me out with some questions regarding VPC. I'm pretty new to AWS, and I'm just trying to build a sample web app to get my feet wet with everything. I've been roughly following this guide to try to set up a basic project using Zappa + Django. I've gotten to the state where I'm configuring a VPC and trying to add a Postgres instance that Django/Zappa can talk to. Per that article, I've set up my network like this:
Internet Gateway attached to VPC
4 Public subnets
4 Private subnets
Lambda function in 2 of the private subnets
RDS with subnet group in other 2 private subnets
EC2 box in 1 public subnet that allows SSH from my local IP to forward port 5432 to RDS instance
My issue comes when I try to run migrations on my local machine using "python manage.py makemigrations". I keep getting an error that says "Is the server running on host "zappadbinstance.xxxxx.rds.amazonaws.com" (192.168.x.xxx) and accepting TCP/IP connections on port 5432?".
I'm not sure what step I'm missing. I followed this guide and this post to setup the bastion host, and I know it is working because I am able to (1) ssh from my terminal and (2) establish a database connection using PSequel on my local machine.
I feel like I'm really close but I must be missing something. Any help or pointers would be greatly appreciated.
First, nice job on getting this set up - it's quite a challenge. I agree with you that you're almost there. Since you can connect with PSequel from your local system, that validates that your machine is accurately connected to the VPC RDS from a network perspective.
The next area to look at is the Django setup. If the local machine's Django settings are incorrect, that would cause this error, so the database section in your settings file needs to be different on the local machine. As you describe in one of your comments above, I believe you have
'HOST': 'xxxxx.us-east-2.rds.amazonaws.com'
When you run python manage.py makemigrations, Django attempts to use that host name and connect to it. Unfortunately, this bypasses your carefully constructed SSH tunnel.
To fix this, you can either:
Edit your local settings.py to have 'HOST':'127.0.0.1'
Edit your /etc/hosts file to map the FQDN above to 127.0.0.1 (but I wouldn't recommend this, since I often forget to remove such edits)
It should be easy enough to try option 1 above and see if that works.
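For reference, a minimal sketch of what option 1 looks like in the local settings.py (the database name, user, and password are placeholders; the host and port point at the local end of the SSH tunnel rather than at RDS directly):

    # Local settings.py sketch: connect through the SSH tunnel on localhost,
    # which the bastion forwards to the RDS instance on port 5432.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "zappadb",        # placeholder database name
            "USER": "zappauser",      # placeholder user
            "PASSWORD": "change-me",  # placeholder password
            "HOST": "127.0.0.1",      # the tunnel's local end, not the RDS FQDN
            "PORT": "5432",
        }
    }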

GitLab CE keeps resetting my external_url

I'm running GitLab CE privately within an AWS VPC that I access via a VPN instance. I installed the latest AWS AMI of GitLab CE, then upgraded it to the latest version of GitLab. I've gotten everything working, except for one thing: Whenever I reboot the instance in EC2, my /etc/gitlab/gitlab.rb's external_url is reset to the IP address of my VPC's SNAT instance, almost as if GitLab is asking "what is my public IP?" and then changing the setting's value to that answer. I keep changing it back to the internal hostname provided by my VPC's Route 53 hosted zone, https://gitlab.corp.mydomain.com, but it's reset every time I reboot the instance. To be clear, this GitLab instance is not exposed to the internet, but it does have egress to the internet through the SNAT (e.g., to update OS packages).
How can I force my internal hostname to stick? I can still access GitLab through my browser at https://gitlab.corp.mydomain.com, so perhaps this doesn't matter?
After a quick search, I found this:
https://gitlab.com/gitlab-org/omnibus-gitlab/merge_requests/2021
Summary of the content:
It seems like GitLab's hostname detection does not work correctly if public IPs are deactivated in EC2. To remain usable in such cases, GitLab replaces the hostname with the assigned IP. From version 10.1.3 onward, GitLab will return to the hostname once it can resolve it.
Since it works for you, I would simply keep the configuration.