Zappa + RDS Connection Issues - django

I'm hoping someone can help me out with some questions regarding VPC. I'm pretty new to AWS and I'm just trying to build a sample web app to get my feet wet with everything. I've been roughly following this guide to try to set up a basic project using Zappa + Django. I've gotten to the state where I'm configuring a VPC and trying to add a Postgres instance that Django/Zappa can talk to. Per that article, I've set up my network like this:
Internet Gateway attached to VPC
4 Public subnets
4 Private subnets
Lambda function in 2 of the private subnets
RDS with subnet group in other 2 private subnets
EC2 box in 1 public subnet that allows SSH from my local IP to forward port 5432 to RDS instance
My issue comes when I try to run migrations on my local machine using "python manage.py makemigrations". I keep getting an error that says "Is the server running on host "zappadbinstance.xxxxx.rds.amazonaws.com" (192.168.x.xxx) and accepting TCP/IP connections on port 5432?".
I'm not sure what step I'm missing. I followed this guide and this post to set up the bastion host, and I know it is working because I am able to (1) SSH in from my terminal and (2) establish a database connection using PSequel on my local machine.
I feel like I'm really close but I must be missing something. Any help or pointers would be greatly appreciated.

First, nice job on getting this set up - it's quite a challenge. I agree that you're almost there. Since you can connect with PSequel from your local system, that confirms your machine can reach the RDS instance in the VPC at the network level.
The next area to look at is the Django setup. If the Django settings on your local machine are incorrect, that would cause this error: the database section of your settings file needs to be different on the local machine. As you describe in one of your comments above, I believe you have
'HOST': 'xxxxx.us-east-2.rds.amazonaws.com'
When you run python manage.py makemigrations, Django attempts to use that host name and connect to it directly. Unfortunately, this bypasses your carefully constructed SSH tunnel.
To fix this, you can either:
1) Edit your local settings.py to use 'HOST': '127.0.0.1', so the connection goes through the tunnel, or
2) Edit your /etc/hosts file to point the FQDN above at 127.0.0.1 (I wouldn't recommend this, since it's easy to forget to remove the edit later).
It should be easy enough to try #1 and see if that works; a sketch of the settings change is below.
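As a minimal sketch of option #1 (the database name and credentials here are placeholders, and it assumes the tunnel forwards local port 5432 to the RDS instance), the DATABASES block in your local settings.py would look something like:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'zappadb',        # placeholder database name
        'USER': 'zappauser',      # placeholder credentials
        'PASSWORD': '...',
        'HOST': '127.0.0.1',      # go through the SSH tunnel, not the RDS FQDN
        'PORT': '5432',
    }
}

With that in place, manage.py connects to 127.0.0.1:5432, and the bastion forwards the connection on to the RDS instance.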

Related

Publishing Django project in EC2 AWS instance

I am trying to publish a Django web page on an AWS instance. I have configured the EC2 instance based on a Windows machine, with an Elastic IP to get a fixed public IP. However, when I run the Django project, I can only bind it to the private IP. If, after running the project, I try to access the web site from the public IP, I cannot reach it (the browser just shows an error message about being unable to access the site).
Below are two snapshots of the execution: the first shows that it works with the private IP, while the second depicts the error I am receiving.
I have tried with 0:8000, but also with <public_IP>:8000, without success. In the case of setting the public IP, I got Error: That IP address can't be assigned to. Last but not least, in the settings.py file I have included ALLOWED_HOSTS = ['*'].
And, of course, during this test phase I have allowed traffic on the TCP ports in the security group on the EC2 instance. Later on I will remove some permissions, but I would prefer to start from the most open scenario.
Could you let me know how to proceed, please? Many thanks in advance.
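For what it's worth, the usual explanation for That IP address can't be assigned to is that an EC2 public IP is NAT-mapped by AWS and never actually assigned to the instance's network interface, so a server process cannot bind to it. The standard approach is to bind to all interfaces and then browse to the public IP (assuming port 8000 is open in the security group and the Windows firewall):

python manage.py runserver 0.0.0.0:8000

Django's 0:8000 is shorthand for the same binding, so if that was already failing, the block is more likely in the security group or the OS firewall than in Django itself.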

Wordpress running on EC2 t3.small becomes unavailable (ELB Error 504) after X amount of time, needs rebooting

I have a problem with my Amazon EC2 instances (one that did not happen when I was using DigitalOcean).
I manage several EC2 instances. My personal EC2 runs about 5 Wordpress sites on a t2.micro instance; the traffic is not high, so load speed is fine.
I also have another two instances for one of my clients: a t2.micro (running only one Wordpress site) and a t3a.micro (running 4 Wordpress sites). The issue occurs on all 3 instances (mine and both of my client's).
I have a CloudWatch alarm that notifies me by email when an Error 504 happens. Once I get the alarm, the website becomes unavailable (Cloudflare shows me Error 504), but I can still get in via SSH or Webmin. I run service nginx status and all seems to be fine, and the same goes for service php7.2-fpm. I can run pkill nginx && pkill php* and then service nginx start && service php7.2-fpm start without errors, but when I try to load the site, the Error 504 is still there.
To test, I decided to install and configure Apache, with and without PHP-FPM enabled: same problem. The instance runs well and the websites are fast, but after X amount of hours it becomes inaccessible via the web, and the only solution is rebooting...
What's the only thing that solves the issue? Well, rebooting the instance... After it boots, the websites are available again. Please note that I moved from DigitalOcean to AWS because it is more useful for me, but I can't understand why the problem happens here and not there, since I had a similar instance configured very similarly...
In all of the instances I've a setup with:
OS: Ubuntu 18.04
Types: Two t2.micro and one t3a.micro
ELB: Enabled
Security Groups: only allow ports 80 and 443, from all sources.
Database: In RDS, not on the same instance.
I can provide logs for anything you might ask about, but I have reviewed all the Nginx and PHP-FPM logs and I can't see any anomalies. The same goes for syslog and kern.log, but I can provide them if it helps.
Hope you can give me a hand. Thanks for your advice!
EDIT:
I already found the origin of the issue. The problem wasn't in the EC2 instances. All my headache was because the RDS instance had a single Security Group attached that only allowed access from my IP (for remote management of the databases) and from the public IPs of the EC2 instances that run Wordpress; it turns out I also needed to whitelist the private IPs of those EC2s... A really noob mistake, but that was the solution.
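For anyone hitting the same wall, a minimal sketch of that fix with boto3 (the group ID, port, and private IPs below are placeholders for your own values):

import boto3

# Placeholders -- substitute your RDS security group ID and the
# private IPs of the EC2 instances running Wordpress.
RDS_SG_ID = 'sg-0123456789abcdef0'
PRIVATE_IPS = ['172.31.10.5/32', '172.31.20.7/32']

ec2 = boto3.client('ec2')
ec2.authorize_security_group_ingress(
    GroupId=RDS_SG_ID,
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 3306,  # default MySQL port used by Wordpress
        'ToPort': 3306,
        'IpRanges': [{'CidrIp': ip, 'Description': 'Wordpress EC2 private IP'}
                     for ip in PRIVATE_IPS],
    }],
)

Traffic between instances inside the VPC arrives from the private address, which is why rules for the public IPs never matched.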

Coldfusion Administrator - connect to data source via SSH

I'd like to configure my ColdFusion instance to connect to a MySQL database over SSH, but I'm not really sure how.
Basically, I have an EC2 instance in the same region as an RDS instance, for the purposes of a development environment. I want to hook into my production RDS instance so that I can do some tests with production data for a specific feature I'm working on, but it's turning out to be quite a bit of trouble since it's in a different region.
I'd rather not alter AWS in any way to achieve this. So far the only thing I could think to try was to SSH into my EC2 instance and set up a tunnel like this:
ssh -i ./mykey.pem -N -L 3306:localhost:3306 username@host_ip
When I enter this command I don't see any output, so I assume it is running; however, when I try to access my EC2 instance via the web I see this error: Timed out trying to establish connection.
Is there something wrong with my setup? I know I have the correct key, credentials, and host, but I am a bit confused about the ports. I figured my ColdFusion admin panel is looking on port 3306 and my database is served on port 3306, so 3306:localhost:3306 seems correct to me, but obviously I am doing something wrong.
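One hedged observation: in -L 3306:localhost:3306, the destination host is resolved on the SSH server, so localhost here means the EC2 instance itself, not RDS. When the database lives on an RDS endpoint, the forward usually needs to name that endpoint as the destination (the endpoint below is a placeholder):

ssh -i ./mykey.pem -N -L 3306:your-db.xxxxx.us-west-1.rds.amazonaws.com:3306 username@host_ip

The ColdFusion data source would then point at localhost:3306, and the tunnel carries the connection on to RDS. Note also that local port 3306 must not already be occupied by a MySQL instance on the machine running the tunnel.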

Upload local Vagrant package.box to AWS

So, I've been working locally in a Vagrant Ubuntu box for the past month: I've spent a lot of time customizing it and installing exactly the software I want on it. I started all of this through the normal Vagrant tutorial (aka, nothing special). I packaged my local Vagrant box into a package.box file. Now, I want to move my development environment (i.e. the package.box file) to an Amazon EC2 instance on AWS. I know I'm not supposed to ask for software recommendations, but my question is basically: is this possible to do and, if it is, could you point me to some examples of people doing it? I've read that Packer might be an option, but it looks to me (a very inexperienced perspective) like maybe I should have started with that instead of trying to use it now. Any help would be appreciated - I don't want to spend a couple of weeks setting up a new environment when I have one set up locally.
Edit:
Progress! I followed @error2007s's link and followed the tutorial. I'm at the point where I've uploaded the VMDK image to S3 and provisioned an instance using it (all done automatically with the ec2-import-instance command on the CLI). However, I don't see a Public IP to access the new instance with after I start it up.
I think this is related to cloud-init somehow, but I'm not sure what that really is. I tried it with both the /etc/cloud/cloud.cfg file that came with the box and the one listed here, and neither of the two boxes I uploaded gave me a Public IP to access.
Edit 2:
Here are some things I see in the Console (They all seem right to me, but a more experienced eye might see something wrong):
subnet info:
Auto-assign Public IP: yes
Network ACL:
VPC info:
DNS resolution: yes
DNS hostnames: yes
ClassicLink DNS Support: no
VPC CIDR: 172.31.0.0/16
DHCP Option Set:
Options: domain-name = ec2.internal domain-name-servers = AmazonProvidedDNS
From my perspective, those all look right, or am I missing something?
I assigned an Elastic IP per these instructions, but when I ssh ec2-user@<elastic-ip>, it says ssh: connect to host <elastic-ip> port 22: Connection refused. The security group assigned to the instance is set to allow all protocols on all ports. Also, this is the first time I've encountered an Elastic IP, and I'm unsure what exactly it is doing.
Amazon lets you transfer your VM to AWS as an EC2 instance. Check this tutorial; it is simpler.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingVirtualMachinesinAmazonEC2.html
You want to use the Vagrant AWS provider found here:
https://github.com/mitchellh/vagrant-aws
This is a Vagrant 1.2+ plugin that adds an AWS provider to Vagrant,
allowing Vagrant to control and provision machines in EC2 and VPC.
This will allow you to provision your AWS instances using Vagrant, allowing you to migrate the same local development environment to an AWS EC2 instance.
There is a good tutorial here:
https://nurmrony.wordpress.com/2015/03/15/vagrant-deploy-and-provisioning-an-amazon-ec2-instance/
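As a rough sketch of the workflow those links describe (the commands and the dummy box URL are from the plugin's README; your Vagrantfile still needs your own AMI, credentials, and instance settings in its config.vm.provider "aws" block):

vagrant plugin install vagrant-aws
vagrant box add dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
vagrant up --provider=aws

The dummy box exists only to satisfy Vagrant's requirement that every machine be based on a box; it carries no actual image.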
Hi, I have found these articles but I have not yet tested them myself. I'm still in the middle of organizing my personal notes and identifying my technology stack. I intend to have a Homestead Vagrant box replicated as an EC2 instance, so I won't have to configure the instance(s) manually.
https://nurmrony.wordpress.com/2015/03/15/vagrant-deploy-and-provisioning-an-amazon-ec2-instance/
https://www.tothenew.com/blog/using-vagrant-to-deploy-aws-ec2-instances/
https://foxutech.com/how-to-deploy-on-amazon-ec2-with-vagrant/
https://blog.scottlowe.org/2016/09/15/using-vagrant-with-aws/
https://devops.com/devops-primer-using-vagrant-with-aws/
I find their approaches similar. The only thing that worries me is the "vagrant box add" part.
I asked myself: what would happen if I had to do this setup again for familiarization purposes, given that I already added a Vagrant box (the dummy one, as instructed in the tutorials) previously?

AWS: Cannot connect to Amazon instance

I had been trying to set up a MongoDB database with an exposed REST API (through Crest, then Sleepy Mongoose), but neither of these was working. I tried to do a minimal sanity test of "Can I connect to that AWS machine or not?", so here's what I tried:
1) I set up a new Amazon instance (Ubuntu 14.04), and I made sure that all incoming TCP connections were accepted.
2) I tried running sudo python -m SimpleHTTPServer 80.
3) This worked when logged into the machine and doing curl http://localhost:80/ and curl http://XX.XX.XX.XX:80/ (with the machine's IP address substituted, of course). However, on my local machine, the command just timed out.
I'm really looking forward to any guidance here, so I can hopefully go back to what I was originally doing (MongoDB, exposing a REST API, etc.). Really thankful for any suggestions since this has been driving me crazy!!
This is probably a security group issue.
When doing the curl http://XX.XX.XX.XX:80/ on the machine itself, did you try the internal IP (172.x.x.x / 10.x.x.x / 192.168.x.x) or the external IP?
Also, does the machine have an external ip assigned? (I'm guessing it does, otherwise ssh'ing to it would only be possible from another machine in the same subnet.)
Go to the AWS console, open the instance details, and check the instance's security groups. Is port 80 open to the world (0.0.0.0/0)?
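If it's easier than clicking through the console, here's a small sketch using boto3 (the instance ID is a placeholder) that prints every inbound rule on the instance's security groups:

import boto3

ec2 = boto3.client('ec2')

# Placeholder instance ID -- substitute your own.
resp = ec2.describe_instances(InstanceIds=['i-0123456789abcdef0'])
instance = resp['Reservations'][0]['Instances'][0]

for sg in instance['SecurityGroups']:
    details = ec2.describe_security_groups(GroupIds=[sg['GroupId']])
    for perm in details['SecurityGroups'][0]['IpPermissions']:
        # FromPort/ToPort are absent for "all traffic" (-1) rules
        print(sg['GroupId'],
              perm.get('IpProtocol'),
              perm.get('FromPort'),
              perm.get('ToPort'),
              [r['CidrIp'] for r in perm.get('IpRanges', [])])

An inbound rule with protocol tcp, port 80, and CIDR 0.0.0.0/0 is what you're looking for; if it's missing, a timeout from outside is exactly the symptom you'd see.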