What's the correct way to use Amazon's Elasticache service (with the Memcached engine) with Django's MemcachedCache backend?
I have a Memcached service running locally, which works fine with this Django setting:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
I thought using Elasticache would be as simple as creating the Memcached cluster instance and then changing my setting to:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'instance-name.abcdef.cfg.use1.cache.amazonaws.com:11211',
    }
}
However, when I test this locally, the cache silently fails and doesn't successfully store anything.
What am I doing wrong? How do I get the MemcachedCache backend to show a real error message? Do I need to use an ElastiCache-specific Django backend like this?
You're unable to connect to ElastiCache instances from outside of AWS's network. Even though your security groups might have rules in place to allow traffic from your IP address (or the entire internet), AWS's network will not accept any traffic to the node that does not originate from within their network.
This configuration is fine; however, it will only work from an EC2 instance.
Alternatively, you can follow this guide (which also confirms my answer above), which basically involves spinning up an EC2 instance whose IP address you use in your CACHES configuration instead. That instance is configured to NAT incoming traffic on port 11211 and forward it on to your ElastiCache node. This configuration is far from ideal, though, and shouldn't ever be used in production.
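To surface a real error instead of Django's silent failure, you can also bypass the cache framework and open a raw TCP connection to the node yourself. A minimal sketch, reusing the placeholder endpoint from the question:

import socket

# Placeholder endpoint from the question; substitute your node's address.
host, port = 'instance-name.abcdef.cfg.use1.cache.amazonaws.com', 11211

try:
    with socket.create_connection((host, port), timeout=5) as sock:
        # 'stats' is a plain-text Memcached command; any reply proves
        # the node is reachable at the TCP level.
        sock.sendall(b'stats\r\n')
        print(sock.recv(4096).decode())
except OSError as exc:
    # This is the error Django's backend was swallowing.
    print(f'Cannot reach {host}:{port}: {exc!r}')

Run from an EC2 instance inside the VPC, this should print server stats; run locally, it should time out, which is consistent with the answer above.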
I have an application deployed on Cloud Run. It runs behind an HTTPS load balancer.
I want it to be able to cache some data using the Memorystore service. I basically followed the documentation to use a serverless VPC connector, but this exception keeps popping up:
Unable to create a new session key. It is likely that the cache is unavailable.
I am guessing that my Cloud Run service can't access Memorystore.
On Django I have:
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": f"redis://{CHANNEL_REDIS_HOST}:{CHANNEL_REDIS_PORT}/16",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
            "IGNORE_EXCEPTIONS": True,
        },
        "KEY_PREFIX": "api"
    }
}
where CHANNEL_REDIS_HOST is the IP of my Memorystore primary endpoint and CHANNEL_REDIS_PORT is the port.
When I run this command:
gcloud redis instances describe instance_name --region region --format "value(authorizedNetwork)"
it returns projects/my_project/global/networks/default.
Then, on the VPC network page, I clicked on 'default' and then on 'ADD SUBNET'. I created my subnet with the IP address range 10.0.0.0/28. Maybe the problem comes from this step, as I don't fully understand how this IP communication works.
When I run this command:
gcloud compute networks subnets describe my-subnet
the purpose is PRIVATE as intended and the network is https://www.googleapis.com/compute/v1/projects/my_project/global/networks/default.
So I think that my Memorystore instance and my subnet are able to connect.
Then, I created a serverless VPC connector, using the same region, the default network and the subnet I just created.
Finally, on my service, I set the VPC connector to the one I just created and redeployed using the 'Only route requests to private IPs through the VPC connector' option. If I choose 'Route all traffic through the VPC connector', my deployment fails, probably because I am behind a load balancer; in any case, I do not want to route all traffic through my connector.
And after doing all this, I still receive the error mentioned at the beginning of this message.
Any ideas?
Thanks
So I think my issue was using db 16. Since the maximum number of databases on Memorystore is 16, the valid indexes run from 0 to 15. Changing it made it work.
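In other words, the only change needed is the database index at the end of LOCATION. A sketch of the corrected setting, keeping the question's placeholder variables:

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        # Database indexes run from 0 to 15; 16 is out of range.
        "LOCATION": f"redis://{CHANNEL_REDIS_HOST}:{CHANNEL_REDIS_PORT}/15",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
            # Consider dropping IGNORE_EXCEPTIONS while debugging so that
            # connection errors surface instead of failing silently.
        },
        "KEY_PREFIX": "api"
    }
}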
I'm setting up pritunl and I want to use Amazon DocumentDB instead of MongoDB or MongoDB Atlas. This is for a VPN idea that I had; the server is running Ubuntu 18.04 and I followed the standard install guide from pritunl. I have it working correctly with MongoDB, but for DR and scaling purposes I would like to move this to AWS DocumentDB.
I have modified the /etc/pritunl.conf file and replaced the default local mongodb_uri with the DocumentDB URI, then restarted the service, and I would expect to see the pritunl login/setup page. I have opened the required ports on the box and run through the pritunl setup page, and it just hangs.
"mongodb_uri": null,
"log_path": "/var/log/pritunl.log",
"static_cache": true,
"temp_path": "/tmp/pritunl_de511dc19aaf497dbdf67df0a0634e3d",
"bind_addr": "0.0.0.0",
"www_path": "/usr/share/pritunl/www",
"local_address_interface": "auto",
"port": 443
"mongodb_uri": mongodb://user:<***************>#vpn-pritunlleu-west-1.docdb.amazonaws.com:27017/test,
"log_path": "/var/log/pritunl.log",
"static_cache": true,
"temp_path": "/tmp/pritunl_de511dc19aaf497dbdf67df0a0634e3d",
"bind_addr": "0.0.0.0",
"www_path": "/usr/share/pritunl/www",
"local_address_interface": "auto",
"port": 443
Turns out that DocumentDB does not support tailable cursors or capped collections, which are a fundamental requirement for pritunl, so I am going to use MongoDB Atlas or something like that.
A hanging connection is typically a result of the client not being able to connect to the DocumentDB cluster, either because the cluster's security group does not allow inbound connections on port 27017 or because the client is in a different VPC than the cluster.
For troubleshooting, please see: https://docs.aws.amazon.com/documentdb/latest/developerguide/troubleshooting.html#troubleshooting.cannot-connect
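A quick way to turn the hang into an explicit error is to give the driver a short server-selection timeout; without one, the driver retries for roughly 30 seconds, which looks like a hang. A minimal pymongo sketch with a hypothetical placeholder URI:

from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Hypothetical placeholder URI; substitute your cluster's endpoint.
uri = 'mongodb://user:password@your-cluster.docdb.amazonaws.com:27017/test'

client = MongoClient(uri, serverSelectionTimeoutMS=5000)
try:
    client.admin.command('ping')
    print('Connected to the cluster')
except ServerSelectionTimeoutError as exc:
    # Typical causes: the security group blocks 27017, or the client
    # is outside the cluster's VPC.
    print(f'Cannot reach the cluster: {exc}')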
I am working on a Django project that uses Zappa to host a serverless app on Lambda. It uses a Postgres database on the back end, and I've been able to use it flawlessly for some time. Recently I needed to use urllib, and so I needed a NAT instance (an EC2 micro instance) to allow Lambda to access the internet.
Now that it's set up, it works fine in production: I can see my site and all the pieces interact correctly. However, locally, Django can't seem to connect; it gets this error:
django.db.utils.OperationalError: could not connect to server: Connection timed out (0x0000274C/10060)
Is the server running on host "XXXXXXXXX.XXXXXXXXX.us-west-2.rds.amazonaws.com" (54.70.245.158) and accepting
TCP/IP connections on port 5432?
To outline the steps I've gone through: I created a VPC network with private and public subnets through the wizard, and added two more private subnets in other zones for availability. I went to my Lambda function and changed its subnets to the new private subnets, and I also moved my RDS instance to the same subnets. For my RDS, I created a new security group for Postgres (port 5432 inbound with source 0.0.0.0/0).
My settings.py under Django remains the same:
DATABASES = {
    # AMAZON RDS Instance
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'XXXXXXXXX',
        'USER': 'XXXXXXXXX',
        'PASSWORD': 'XXXXXXXXX',
        'HOST': 'XXXXXXXXXX.XXXXXXXXXX.us-west-2.rds.amazonaws.com',
        'PORT': '5432',
    }
}
I'm not sure where to go from here. I can honestly say this is out of my comfort zone and I don't know what I'm doing. My suspicion is there's something I need to do with the security group, but I'm in over my head and would really appreciate some help. Thanks!
After some fiddling, I realized that I was overcomplicating things. The RDS instance should remain on the subnets it launched with; there is no need to move it over to the same subnets as the NAT instance. Once I moved it back to the original subnets, it functioned fine both locally and in production.
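For anyone debugging a similar timeout, a short psycopg2 probe with an explicit connect_timeout separates network problems from credential problems. A sketch using the question's placeholder values:

import psycopg2

try:
    conn = psycopg2.connect(
        host='XXXXXXXXXX.XXXXXXXXXX.us-west-2.rds.amazonaws.com',  # placeholder
        port=5432,
        dbname='XXXXXXXXX',
        user='XXXXXXXXX',
        password='XXXXXXXXX',
        connect_timeout=5,  # fail fast instead of hanging for minutes
    )
    conn.close()
    print('Network path and credentials are both fine')
except psycopg2.OperationalError as exc:
    # A timeout here points at routing/security groups, not Django settings.
    print(f'Connection failed: {exc}')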
Hi, I am trying to connect my Django application to Redis ElastiCache and am having trouble getting it connected on AWS. The application is published to an EC2 instance using Elastic Beanstalk and runs perfectly when I am not trying to connect to my Redis cache.
From the post here (Setting up ElastiCache Redis with Elastic BeanStalk + Django), I created my ElastiCache instance without a cluster, and I have set up both the EC2 instance and the Redis cache to use the same security group.
Here is how my cache is configured in settings.py.
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://my-cache.kjshd.0001.use2.cache.amazonaws.com:6379/',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient'
        }
    }
}
What am I missing? Are there additional settings that need to be changed on my cache or somewhere in AWS to open communication? Does this configuration look okay? I was previously using Redis in Azure and this configuration worked, but now I have the requirement to move to AWS. Is there a way to test that my EC2 instance can connect to Redis? I have the ability to SSH into the server, but I was not sure what I would do once connected.
Thanks for any help.
After setting the security group, I found out that I also needed to change the security group's inbound rules to allow connections to my ElastiCache Redis node.
Documentation was found here.
https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/GettingStarted.AuthorizeAccess.html#GettingStarted.AuthorizeAccess.VPC
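To answer the "what would I do once connected" part: once SSH'd into the EC2 instance, a redis-py ping is enough to verify that the node is reachable through the shared security group. A minimal sketch with the endpoint from the question:

import redis

# Endpoint copied from the LOCATION setting above.
r = redis.Redis(
    host='my-cache.kjshd.0001.use2.cache.amazonaws.com',
    port=6379,
    socket_connect_timeout=5,  # surface security-group problems quickly
)
try:
    print('PING ->', r.ping())  # True means the node is reachable
except redis.exceptions.RedisError as exc:
    print(f'Cannot reach the node: {exc}')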
I am trying to set up an Amazon server to host a dynamic website I'm currently creating. I bought the domain on GoDaddy.com, and I believe that what I've done so far has linked the domain to my Amazon account.
I followed this tutorial: http://www.mycowsworld.com/blog/2013/07/29/setting-up-a-godaddy-domain-name-with-amazon-web-services/
In short, this walked me through setting up Amazon S3 (Simple Storage Service) and Amazon Route 53. I then configured the DNS servers, and my website now launches properly on the domain.
I'm not sure of the next step from here, but I would like to set up:
- A database server
- Anything else that might be necessary to run a dynamic website
I am very new to hosting websites, and semi-new to web development in general, so the more in depth the better.
Thanks a lot
You have two options on AWS: run an EC2 server and set up your application yourself, or continue to use AWS managed services like S3.
Flask apps can be hosted on Elastic Beanstalk, and your database can be hosted on RDS (Relational Database Service). Then the two can be integrated.
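When an RDS instance is attached to an Elastic Beanstalk environment, Beanstalk exposes the connection details as environment variables. A sketch of that wiring in Flask, assuming Postgres with psycopg2 as the driver:

import os
import psycopg2
from flask import Flask

app = Flask(__name__)

def get_db_connection():
    # Elastic Beanstalk sets these when an RDS instance is attached
    # to the environment; configure them yourself otherwise.
    return psycopg2.connect(
        host=os.environ['RDS_HOSTNAME'],
        port=os.environ['RDS_PORT'],
        dbname=os.environ['RDS_DB_NAME'],
        user=os.environ['RDS_USERNAME'],
        password=os.environ['RDS_PASSWORD'],
    )

@app.route('/')
def index():
    conn = get_db_connection()
    try:
        with conn, conn.cursor() as cur:
            cur.execute('SELECT version()')
            return cur.fetchone()[0]
    finally:
        conn.close()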
Otherwise, spin up your own t2.micro instance in EC2, log in via SSH, and set up the database server and application like you have locally. This server could also host the (currently S3-hosted) static files.
I have no idea what your requirements are; personally, I would start with setting up the EC2 instance and go from there, as integrating AWS services without knowing what you need is probably not the easiest first step.
Heroku might be another option. They host their services on AWS and give you an end-to-end solution for deploying and running your Python code without getting your hands dirty setting up servers.