Problem connecting Redis and Django in GCP

I have deployed a Redis instance using GCP Memorystore.
I also have a Django app deployed on App Engine. However, I am having problems connecting the two, even though both are deployed in the same region.
The package I'm using is django_redis. When I try to log in to the admin page, I get a connection error.
The error is:
Exception Value: Error 110 connecting to <Redis instance IP>:6379. Connection timed out.
Exception Location: /env/lib/python3.7/site-packages/redis/connection.py in connect, line 557
In settings.py I use:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("<Redis instance IP>", 6379)],
        },
    },
}

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://<Redis instance IP>/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient"
        }
    }
}
Note: with Redis installed locally and the host set to localhost, everything works fine.

In order to connect to Memorystore, you have to set up Serverless VPC Access (a connector to your VPC network) for your application and add that connector to app.yaml via the vpc_access_connector property. It's described in the docs: Connecting to a VPC network.
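For reference, a minimal app.yaml sketch of that property (the project, region, and connector name below are placeholders; the connector itself has to be created in Serverless VPC Access first, in the same region as the app):

runtime: python37

vpc_access_connector:
  name: projects/<PROJECT_ID>/locations/<REGION>/connectors/<CONNECTOR_NAME>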

Related

Access Django admin from Firebase

I have a website which has a React frontend hosted on Firebase and a Django backend which is hosted on Google Cloud Run. I have a Firebase rewrite rule which points all my API calls to the Cloud Run instance. However, I am unable to use the Django admin panel from my custom domain which points to Firebase.
I have tried two different versions of the rewrite rules:
"rewrites": [
{
"source": "/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "**",
"destination": "/index.html"
}
]
--- AND ---
"rewrites": [
{
"source": "/api/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "/admin/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "**",
"destination": "/index.html"
}
]
I am able to see the login page when I go to url.com/admin/; however, I am unable to go any further. It just refreshes the page with empty email/password fields and no error message. As an FYI, it is not an issue with my username and password, as the admin panel works fine when I access it directly via the Cloud Run URL.
Any help will be much appreciated.
I didn't actually find out why the admin login page just refreshes when logging in through the Firebase rewrite rule; however, I thought of an alternative way to access the admin panel from my custom domain.
I added a custom domain mapping to the Cloud Run instance so that it uses a subdomain of my site's domain, and I can now access the admin panel at admin.customUrl.com rather than customUrl.com/admin/.
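If you take the subdomain route, the Django settings also need to accept the new host. A minimal sketch, assuming admin.customUrl.com is the domain mapped to the Cloud Run service (the CSRF_TRUSTED_ORIGINS entry is only needed on Django 4.0+, where a scheme is required):

# settings.py (sketch; admin.customUrl.com is a placeholder for your mapped domain)
ALLOWED_HOSTS = ["admin.customUrl.com"]
CSRF_TRUSTED_ORIGINS = ["https://admin.customUrl.com"]  # Django 4.0+ requires the scheme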

How to cache with Redis on a different server

I have an app server which hosts the Django app, and another server for caching. I am thinking of using Redis for caching. How do I pass the IP of the Redis server to my Django app?
Use the CACHES setting. If you are using django-redis, you can do the following, replacing 127.0.0.1 with the IP or hostname of your Redis server:
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient"
        },
        "KEY_PREFIX": "example"
    }
}
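Once LOCATION points at the remote Redis host, a quick sanity check from python manage.py shell confirms the app can reach it:

from django.core.cache import cache

cache.set("ping", "pong", timeout=30)   # written to the remote Redis instance
print(cache.get("ping"))                # prints "pong" if the connection works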

Deploy Multiple full-stack React + Express applications with path routing on ELB

I currently have four containerized React + Express applications (port 3001 exposed) sitting on four individual ECS instances with four different CNAMEs. They each sit behind their own nginx service as a reverse proxy.
Given that the number of applications may increase, I'm hoping to re-deploy this on ECS with an ELB, but I'm running into problems with the path routing. My goal is to have a system where <url_name>/service-1 will route traffic to the container referenced in service-1 and so on.
Currently, the services are all in a running state, but the routing produces a bunch of console errors saying that the static JS and CSS files produced by the React build command cannot be found at <url_name>/. Has anyone found a way to run multiple full-stack React + Express applications with path routing on an ELB, or a workaround such as adding an Nginx service or setting the React homepage to a fixed value?
# container_definition
[
  {
    "name": "service-1",
    "image": "<image-name>:latest",
    "cpu": 256,
    "memory": 256,
    "portMappings": [
      {
        "containerPort": 3001,
        "hostPort": 3001
      }
    ],
    "essential": true
  }
]
# rule.json
{
  "ListenerArn": "placeholder",
  "Conditions": [
    {
      "Field": "path-pattern",
      "Values": [
        "/service-1*"
      ]
    }
  ],
  "Priority": 1,
  "Actions": [
    {
      "Type": "forward",
      "TargetGroupArn": "placeholder"
    }
  ]
}
# server.js
const express = require('express'),
      path = require('path');

const createServer = function () {
    const port = 3001;

    // Create a new Express server
    const app = express(),
          subpath = express();

    // Ensure every api route is prefixed by /api
    app.use('/api', subpath);

    // All routes related to access points
    const users = require('./routes/users');
    subpath.get('/users/:id', users.getUserById);

    // serve up static assets, i.e. HTML, CSS, and JS from /build
    app.use(express.static('build'));

    if (process.env.NODE_ENV)
        app.get('*', (req, res) => res.sendFile(path.join(__dirname, 'build', 'index.html')));

    // Start the HTTP listener using the given port
    return app.listen(port, () => console.log(`Express server (HTTP) listening on port ${port}`));
};

module.exports = createServer;
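For illustration, one reading of the "React homepage" workaround mentioned above is to build the frontend with a fixed path prefix and serve it from that same prefix in Express, so the asset URLs line up with the load balancer's path rule. A sketch assuming a prefix of /service-1 (the homepage field and the extra mounts are not part of the code above):

// package.json (fragment): makes the CRA build emit asset URLs under /service-1
//   "homepage": "/service-1",

// server.js: mount the build and the SPA fallback under the same prefix
app.use('/service-1', express.static('build'));
app.get('/service-1/*', (req, res) =>
    res.sendFile(path.join(__dirname, 'build', 'index.html')));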

Heroku Django Channels app runs perfectly fine for some time and then keeps giving 503 errors

I am following this article: https://blog.heroku.com/in_deep_with_django_channels_the_future_of_real_time_apps_in_django
Does it have something to do with the fact that I have only one worker running (free tier), heroku ps:scale web=1:free worker=1:free, as suggested in the article?
So now it is up again, and I guess it will go down again.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [os.environ.get('REDIS_URL', 'redis://localhost:6379')],
        },
        "ROUTING": "malang.routing.channel_routing",
    },
}
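For context, the article's setup runs two process types, which is what the heroku ps:scale command above is scaling. A rough Procfile sketch of that setup, assuming the project module is malang as in the ROUTING setting (the exact flags may differ from the article):

web: daphne malang.asgi:channel_layer --port $PORT --bind 0.0.0.0 -v2
worker: python manage.py runworker -v2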

Setting up ElastiCache Redis with Elastic Beanstalk + Django

Another Stack Overflow answer says you need to set up an elasticache.config file to create Redis servers with ElastiCache automatically.
However, can I just create a Redis instance on AWS (ElastiCache) and add its endpoint to my Django settings? E.g., with django-redis:
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://<REDIS AWS ENDPOINT AND PORT HERE>",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}
I suspect the above could cause trouble with multiple Beanstalk server instances. Given this, I am tempted to use Memcached rather than Redis, since there is a Django package written explicitly for interfacing with AWS ElastiCache for Memcached: django-elasticache.
Thanks,
Andy.
Short answer: yes.
Long answer: I have not used Elastic Beanstalk; however, I can confirm that if you create a Redis instance (that is, cluster mode disabled) in ElastiCache, it will work fine with django-redis. Just insert the primary endpoint into the Django config you posted.
N.B. If you plan to use read replicas, set it up like this:
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": [
            "redis://<MASTER ENDPOINT>",
            "redis://<SLAVE ENDPOINT>",
        ],
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}
If you spin up a Redis cluster (cluster mode enabled), however, you cannot use vanilla django-redis. You'll have to use redis-py-cluster with it, as described in this post. Replicated here:
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://XXX.YYY.ZZZ.cache.amazonaws.com/0',
        'OPTIONS': {
            'REDIS_CLIENT_CLASS': 'rediscluster.RedisCluster',
            'CONNECTION_POOL_CLASS': 'rediscluster.connection.ClusterConnectionPool',
            'CONNECTION_POOL_KWARGS': {
                'skip_full_coverage_check': True  # AWS ElastiCache has disabled CONFIG commands
            }
        }
    }
}
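Note that the rediscluster classes referenced above come from the separate redis-py-cluster package (pip install redis-py-cluster), which has to be installed alongside django-redis.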