I'm trying to connect to Redis Sentinel, but every time I get the same error:
Exception: Could not connect to any sentinel
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [
                {
                    "sentinels": [("redis-cluster.local.svc.cluster.local", 26379)],
                    "master_name": "mymaster",
                }
            ]
        },
    },
}
I can't figure out where to put the password and db keys.
And do I need to list each sentinel's URL in the hosts entry, or is the service address enough?
Note: when connecting to Redis/Sentinel without Channels, we have no issues at all.
From the channels_redis readme:
hosts
The server(s) to connect to, as either URIs, (host, port) tuples, or dicts conforming to create_connection. Defaults to ['localhost', 6379]. Pass multiple hosts to enable sharding, but note that changing the host list will lose some sharded data.
Sentinel connections require dicts conforming to create_sentinel with an additional master_name key specifying the Sentinel master set. Plain Redis and Sentinel connections can be mixed and matched if sharding.
(emphasis mine)
From what I read, it doesn't seem possible to use URIs for sentinel connections, so if you want to set the db and password keys you need to add them to the relevant hosts list item:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [
                {
                    "sentinels": [
                        ("redis-cluster.local.svc.cluster.local", 26379)
                    ],
                    "master_name": "mymaster",
                    "db": 0,
                    "password": "your_password",
                }
            ]
        },
    }
}
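If you would rather not hard-code the password in settings, the same hosts entry can be assembled from environment variables. This is just a sketch: the variable names (SENTINEL_HOST, REDIS_PASSWORD, and so on) are my own convention, not anything channels_redis requires.

```python
import os

# Assumed environment variable names; adjust to your deployment.
SENTINEL_HOST = os.environ.get("SENTINEL_HOST", "redis-cluster.local.svc.cluster.local")
SENTINEL_PORT = int(os.environ.get("SENTINEL_PORT", "26379"))

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [
                {
                    "sentinels": [(SENTINEL_HOST, SENTINEL_PORT)],
                    "master_name": os.environ.get("SENTINEL_MASTER", "mymaster"),
                    "db": int(os.environ.get("REDIS_DB", "0")),
                    "password": os.environ.get("REDIS_PASSWORD"),
                }
            ]
        },
    }
}
```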
Related
I have deployed a Redis instance using GCP Memorystore.
I also have a Django app deployed on App Engine. However, I am facing problems connecting the two. Both are deployed in the same region.
The package I'm using is django_redis. When I try to log in to the admin page I get a connection error.
The error is:
Exception Value: Error 110 connecting to <Redis instance IP>:6379. Connection timed out.
Exception Location: /env/lib/python3.7/site-packages/redis/connection.py in connect, line 557
In settings.py I use:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("<Redis instance IP>", 6379)],
        },
    },
}
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://<Redis instance IP>/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient"
        }
    }
}
Note: with a locally installed Redis set to localhost, everything works fine.
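Error 110 is ETIMEDOUT, which usually means the traffic never reaches the instance (a network/VPC issue rather than Redis itself). A quick stdlib check can confirm whether the app can open a TCP connection at all; the host and port below are placeholders:

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder — substitute your Memorystore instance IP:
# print(can_connect("<Redis instance IP>", 6379))
```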
In order to connect to Memorystore, you have to set up Serverless VPC Access for your application and reference the connector in app.yaml under the vpc_access_connector property. It's described in the docs: Connecting to a VPC network
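A minimal app.yaml fragment for that, assuming a Serverless VPC Access connector has already been created (the project, region, and connector names below are placeholders):

```yaml
# app.yaml (fragment) — all names below are placeholders
runtime: python37
vpc_access_connector:
  name: projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME
```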
I have an app server which holds the Django app, and another server for caching. I am thinking to use Redis for caching. How do I pass the IP of the Redis server to my Django app?
Use settings.CACHES. If you are using django-redis, you can do the following:
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient"
        },
        "KEY_PREFIX": "example"
    }
}
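Since the cache sits on a separate server, avoid hard-coding its IP in settings; read it from the environment instead. A sketch, where the REDIS_HOST/REDIS_PORT variable names are my own convention, not something django-redis requires:

```python
import os

# Assumed environment variable names; set them on the app server's environment.
REDIS_HOST = os.environ.get("REDIS_HOST", "127.0.0.1")
REDIS_PORT = os.environ.get("REDIS_PORT", "6379")

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": f"redis://{REDIS_HOST}:{REDIS_PORT}/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient"
        },
        "KEY_PREFIX": "example",
    }
}
```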
I currently have four containerized React + Express applications (port 3001 exposed), each on its own ECS instance with its own CNAME. They each sit behind their own nginx service acting as a reverse proxy.
Since the number of applications may grow, I'm hoping to redeploy this on ECS behind an ELB, but I'm running into problems with path routing. My goal is a setup where <url_name>/service-1 routes traffic to the container referenced by service-1, and so on.
Currently the services are all in a running state, but routing produces console errors saying the static JS and CSS files produced by the React build command cannot be found at <url_name>/. Has anyone found a way to run multiple full-stack React + Express applications with path routing on an ELB, or a workaround such as adding an nginx service or setting the React homepage to a fixed value?
# container_definition
[
    {
        "name": "service-1",
        "image": "<image-name>:latest",
        "cpu": 256,
        "memory": 256,
        "portMappings": [
            {
                "containerPort": 3001,
                "hostPort": 3001
            }
        ],
        "essential": true
    }
]
# rule.json
{
    "ListenerArn": "placeholder",
    "Conditions": [
        {
            "Field": "path-pattern",
            "Values": [
                "/service-1*"
            ]
        }
    ],
    "Priority": 1,
    "Actions": [
        {
            "Type": "forward",
            "TargetGroupArn": "placeholder"
        }
    ]
}
# server.js
const express = require('express'),
      path = require('path');

const createServer = function () {
    const port = 3001;

    // Create a new Express server
    const app = express(),
          subpath = express();

    // Ensure every API route is prefixed by /api
    app.use('/api', subpath);

    // All routes related to access points
    const users = require('./routes/users');
    subpath.get('/users/:id', users.getUserById);

    // Serve up static assets, i.e. HTML, CSS, and JS from /build
    app.use(express.static('build'));

    // When NODE_ENV is set, fall back to the SPA's index.html for all other routes
    if (process.env.NODE_ENV)
        app.get('*', (req, res) => res.sendFile(path.join(__dirname, 'build', 'index.html')));

    // Start the HTTP listener on the given port
    return app.listen(port, () => console.log(`Express server (HTTP) listening on port ${port}`));
};

module.exports = createServer;
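On the "fixed homepage" workaround mentioned above: Create React App emits asset URLs relative to the homepage field in package.json, so pointing it at the path your listener rule forwards makes the build reference <url_name>/service-1/static/... instead of <url_name>/static/.... A fragment, where the path is an assumption matching the /service-1* pattern in rule.json:

```json
{
  "homepage": "/service-1"
}
```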
Another Stack Overflow answer says you need to set up an elasticache.config file to create Redis servers with ElastiCache automatically.
However, can I just create a Redis instance on AWS (ElastiCache) and add its endpoint to my Django settings? E.g., with django-redis:
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://<REDIS AWS ENDPOINT AND PORT HERE>",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}
I suspect the above could cause trouble with multiple Beanstalk server instances. Given this, I am tempted to use Memcached instead of Redis, since there is a Django package written explicitly for interfacing with AWS ElastiCache for Memcached: django-elasticache.
Thanks,
Andy.
Short answer: yes.
Long answer: I have not used Elastic Beanstalk, but I can confirm that if you create a Redis instance (that is, cluster mode disabled) in ElastiCache, it will work fine with django-redis. Just insert the primary_endpoint into the Django config you posted.
N.B. If you plan to use read replicas, set it up like this:
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": [
            "redis://<MASTER ENDPOINT>",
            "redis://<SLAVE ENDPOINT>",
        ],
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}
If you spin up a Redis cluster, however, you cannot use vanilla django-redis. You'll have to use redis-py-cluster with it, as described in this post. Replicated here:
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://XXX.YYY.ZZZ.cache.amazonaws.com/0',
        'OPTIONS': {
            'REDIS_CLIENT_CLASS': 'rediscluster.RedisCluster',
            'CONNECTION_POOL_CLASS': 'rediscluster.connection.ClusterConnectionPool',
            'CONNECTION_POOL_KWARGS': {
                'skip_full_coverage_check': True  # AWS ElastiCache has disabled CONFIG commands
            }
        }
    }
}
I want to reuse LoopBack's User login and token logic for my app, and store the info in MySQL.
When I leave the User model's datasource as the default (in-memory db), it works fine; Explorer is there.
Now I just want to change the datasource for User, so I edit model-config.json to use my db connector:
...
"User": {
    "dataSource": "db"
},
"AccessToken": {
    "dataSource": "db",
    "public": false
},
"ACL": {
    "dataSource": "db",
    "public": false
},
...
After I restart the server and play around a bit, it complains that some tables are missing from the db:
{ Error: ER_NO_SUCH_TABLE: Table 'mydb.ACL' doesn't exist
Obviously there is no table structure in MySQL to store users, ACLs, and the other built-in models.
How do I get this schema into my db?
Is there a script or command?
I found it myself; it's pretty easy:
https://docs.strongloop.com/display/public/LB/Creating+database+tables+for+built-in+models
Leaving the question up in case somebody else needs it.