I am working on a Django project that uses Zappa to host a serverless app on Lambda. It uses a Postgres database on the back end, and it has worked flawlessly for some time. Recently I needed to use urllib, so I added a NAT instance (an EC2 micro instance) to allow Lambda to access the internet.
Now that it's set up, everything works fine in production: I can see my site and all the pieces interact correctly. Locally, however, Django can't connect and fails with this error:
django.db.utils.OperationalError: could not connect to server: Connection timed out (0x0000274C/10060)
Is the server running on host "XXXXXXXXX.XXXXXXXXX.us-west-2.rds.amazonaws.com" (54.70.245.158) and accepting
TCP/IP connections on port 5432?
To outline the steps I've gone through: I created a VPC with private and public subnets through the wizard, then added two more private subnets in other availability zones. I pointed my Lambda function at the new private subnets and also moved my RDS instance into those same private subnets. For the RDS instance, I created a new security group for Postgres (port 5432 inbound with source 0.0.0.0/0).
My settings.py under Django remains the same:
DATABASES = {
    # AMAZON RDS Instance
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'XXXXXXXXX',
        'USER': 'XXXXXXXXX',
        'PASSWORD': 'XXXXXXXXX',
        'HOST': 'XXXXXXXXXX.XXXXXXXXXX.us-west-2.rds.amazonaws.com',
        'PORT': '5432',
    }
}
I'm not sure where to go from here. I can honestly say this is out of my comfort zone and I don't know what I'm doing. My suspicion is there's something I need to do with the security group, but I'm in over my head and would really appreciate some help. Thanks!
After some fiddling, I realized I was overcomplicating things. The RDS instance should remain in the subnets it launched with; there is no need to move it into the same subnets as the NAT instance. Once I moved it back to its original subnets, it worked fine both locally and in production.
Related
I'm developing an app in Django. During the development process, I've used an Amazon RDS PostgreSQL DB (using a free Dev/Test template). The database configuration for the app is straightforward:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'db_name',
        'USER': 'postgres',
        'PASSWORD': 'db_password',
        'HOST': 'AWS_endpoint',
        'PORT': '5432'
    }
}
I've decided to create a docker image and use Amazon ECS to deploy the app.
When I run the app as a Docker container, it works just fine with the current database configuration; however, I haven't seen any tutorials that discuss this approach to deploying a Docker container and a database (i.e., building an image of the app and using a hosted DB solution, in this case AWS RDS PostgreSQL). As an aside, most tutorials show both the database and the Django site packed into the same image, which seems like a bad idea.
My question: In a production environment, is it acceptable for me to connect my docker container (Django) to my database (Amazon RDS PostgreSQL) in the manner I've described above, only using a new production db instance?
At this point, I'm convinced that the answer to my question is obviously stupid (i.e., "of course, why would you ask such a thing?" or "absolutely not, why would you ask such a thing?"), because I can't find an answer anywhere.
Thanks.
Use a three-tier architecture.
Public subnet: load balancer
Private subnet: ECS cluster
Restricted subnet: no routes to the NAT gateway; access is possible only from the ECS cluster via a security group.
https://github.com/aws-samples/vpc-multi-tier
Remove the DB password from settings. Store it in AWS Secrets Manager and inject it at runtime dynamically (see the sketch after the links below).
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-data-security-container-task/
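As a rough illustration only (the environment variable names and fallback values below are placeholders, not part of your setup), the settings file can stop hard-coding the password and instead read it from an environment variable that the ECS task definition populates from a Secrets Manager secret:

import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME', 'db_name'),
        'USER': os.environ.get('DB_USER', 'postgres'),
        # Injected at runtime by the ECS task definition from Secrets Manager;
        # the secret never ends up in the image or in source control.
        'PASSWORD': os.environ['DB_PASSWORD'],
        'HOST': os.environ.get('DB_HOST', 'AWS_endpoint'),
        'PORT': '5432',
    }
}

The knowledge-center article linked above covers mapping the secret into the container's environment in the task definition.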
I'm hoping someone could help me out with some questions regarding VPC. I'm pretty new to AWS and I'm just trying to build a sample web app to get my feet wet. I've been roughly following this guide to try to set up a basic project using Zappa + Django. I've gotten to the state where I'm configuring a VPC and trying to add a Postgres instance that Django/Zappa can talk to. Per that article, I've set up my network like this:
Internet Gateway attached to VPC
4 Public subnets
4 Private subnets
Lambda function in 2 of the private subnets
RDS with subnet group in other 2 private subnets
EC2 box in 1 public subnet that allows SSH from my local IP to forward port 5432 to RDS instance
My issue comes when I try to run migrations on my local machine using "python manage.py makemigrations". I keep getting an error that says "Is the server running on host "zappadbinstance.xxxxx.rds.amazonaws.com" (192.168.x.xxx) and accepting TCP/IP connections on port 5432?".
I'm not sure what step I'm missing. I followed this guide and this post to set up the bastion host, and I know it is working because I am able to (1) SSH in from my terminal and (2) establish a database connection using PSequel on my local machine.
I feel like I'm really close but I must be missing something. Any help or pointers would be greatly appreciated.
First, nice job on getting this set up - it's quite a challenge. I agree with you that you're almost there. Since you can connect with PSequel from your local system, that confirms your machine can reach the RDS instance in the VPC at the network level.
The next area to look at is the Django setup. If the Django settings on the local machine are incorrect, that would cause this error, so the database section of your settings file should be different on the local machine. As you describe in one of your comments above, I believe you have
'HOST': 'xxxxx.us-east-2.rds.amazonaws.com'
When you run python manage.py makemigrations, Django attempts to connect to that host name directly. Unfortunately, this bypasses your carefully constructed SSH tunnel.
To fix this, you can either:
Edit your local settings.py to have 'HOST':'127.0.0.1'
Edit your /etc/hosts file to point the FQDN above at localhost (but I wouldn't recommend this, since I often forget to remove the edits)
Should be easy enough to try #1 above and see if that works.
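For reference, here is a sketch of option #1 for a local-only settings file. The key file, bastion address, and credentials are placeholders, and the tunnel command in the comment is just the standard ssh port-forwarding form, not something specific to your setup:

# Local development only -- the deployed settings keep the real RDS endpoint.
# Assumes a tunnel is already running, e.g.:
#   ssh -i mykey.pem -N -L 5432:zappadbinstance.xxxxx.rds.amazonaws.com:5432 ubuntu@<bastion-public-ip>
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'your_db_name',
        'USER': 'your_db_user',
        'PASSWORD': 'your_db_password',
        'HOST': '127.0.0.1',  # the local end of the SSH tunnel
        'PORT': '5432',
    }
}

With that in place, python manage.py makemigrations (and migrate) goes through the tunnel instead of trying to reach the private RDS endpoint directly.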
Hi, I am trying to connect my Django application to Redis on ElastiCache and am having trouble getting it connected on AWS. The application is deployed to an EC2 instance using Elastic Beanstalk and runs perfectly when I am not trying to connect to my Redis cache.
Following the post here (Setting up ElastiCache Redis with Elastic BeanStalk + Django), I created my ElastiCache Redis node without a cluster, and I have set up both the EC2 instance and the Redis cache to use the same security group.
Here is how my cache is configured in settings.py.
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://my-cache.kjshd.0001.use2.cache.amazonaws.com:6379/',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient'
        }
    }
}
What am I missing? Are there additional settings that need to be changed on my cache, or somewhere in AWS, to open communication? Does this configuration look okay? I was previously using Redis on Azure and this configuration worked, but now I have the requirement to move to AWS. Is there a way to test that my EC2 instance can connect to Redis? I am able to SSH into the server, but I'm not sure what to do once connected.
Thanks for any help.
After assigning the security group, I found that I also needed to change its inbound rules before I could connect to my ElastiCache Redis node.
The documentation can be found here:
https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/GettingStarted.AuthorizeAccess.html#GettingStarted.AuthorizeAccess.VPC
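To answer the "is there a way to test" part: once you can SSH into the EC2 instance, a minimal round-trip check is sketched below. It assumes the redis-py package is installed on the instance; the hostname is just the placeholder endpoint from the question.

import redis

# Run from the EC2 instance in the same VPC / security group as the node.
r = redis.StrictRedis(host='my-cache.kjshd.0001.use2.cache.amazonaws.com', port=6379, db=0)
print(r.ping())  # True if the inbound rule and network path are correct

If ping() times out, the inbound rule on the cache's security group is the first thing to re-check.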
This is my first time using AWS services with Django.
I was wondering how to configure a Django app running on an EC2 instance to use a Postgres database in RDS.
The EC2 instance is running Ubuntu 14.04.
Is any special configuration required?
All you need to do before running the normal migrations documented in the official tutorial is to make sure your RDS instance is available and accessible from your EC2 instance.
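One quick way to check that accessibility is the sketch below (assuming psycopg2 is installed on the EC2 instance; the endpoint and credentials are placeholders to fill in from your RDS console):

import psycopg2

# Run on the EC2 instance; a successful connect means the RDS security group
# and network routing are correct.
conn = psycopg2.connect(
    host='<your RDS endpoint>',
    dbname='<name defined upon RDS creation>',
    user='<username defined upon RDS creation>',
    password='<password defined upon RDS creation>',
    port=5432,
    connect_timeout=5,
)
print('connected')
conn.close()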
Then modify the DATABASES section of your app's settings.py file as follows:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': '<name defined upon RDS creation>',
        'USER': '<username defined upon RDS creation>',
        'PASSWORD': '<password defined upon RDS creation>',
        'HOST': '<this can be found in "Endpoint" on your RDS dashboard, exclude the ":" and the port number>',
        'PORT': '5432',
    }
}
Following this, continue to migrate normally and all should work well.
If you are able to use a deployment service, take a look at AWS Elastic Beanstalk. It brings EC2, RDS, and S3 storage together in one managed environment (and supports Docker deployments), and it makes it easy to connect your RDS instance to your EC2 instance(s). I launched a Django project with it just a few weeks ago.
What's the correct way to use Amazon's Elasticache service (with the Memcached engine) with Django's MemcachedCache backend?
I have a Memcached service running locally, which works fine with the Django setting:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
I thought using Elasticache would be as simple as creating the Memcached cluster instance and then changing my setting to:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'instance-name.abcdef.cfg.use1.cache.amazonaws.com:11211',
    }
}
However, when I test this locally, the cache silently fails and doesn't successfully store anything.
What am I doing wrong? How do I get the MemcachedCache backend to show a real error message? Do I need to use an ElastiCache-specific Django backend like this?
You're unable to connect to ElastiCache instances from outside of AWS's network. Even though your security groups might have rules in place to allow traffic from your IP address (or the entire internet), AWS's network will not accept any traffic to an ElastiCache node that does not originate from within their network.
Your configuration is fine; however, it will only work from an EC2 instance.
Alternatively, you can follow this guide (which also confirms the answer above): it basically involves spinning up an EC2 instance whose IP address you then use in your CACHES configuration instead. That instance is configured to NAT incoming traffic on port 11211 and forward it on to your ElastiCache node. This setup is far from ideal, though, and should never be used in production.
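If you do test from an EC2 instance inside the VPC, a quick round trip through Django's cache API (run in python manage.py shell with the CACHES setting above) is the simplest way to see whether the connection actually works, since, as you noticed, the Memcached backend fails silently on network errors:

from django.core.cache import cache

# Run from an EC2 instance in the same VPC / security group as the cluster.
cache.set('elasticache_smoke_test', 'ok', 30)
print(cache.get('elasticache_smoke_test'))  # 'ok' if reachable, None otherwise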