Setting up ElastiCache Redis with Elastic Beanstalk + Django

Another Stack Overflow answer says you need to set up an elasticache.config file to create Redis servers with ElastiCache automatically.
However, can I just create a Redis instance on AWS (ElastiCache) and add its endpoint to my Django settings? E.g., with django-redis:
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://<REDIS AWS ENDPOINT AND PORT HERE>",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}
I suspect the above could cause trouble with multiple Beanstalk server instances. Given this, I am tempted to use Memcached instead of Redis, since there is a Django package written explicitly for interfacing with AWS ElastiCache for Memcached: django-elasticache.
Thanks,
Andy.

Short answer: yes.
Long answer: I have not used Elastic Beanstalk, but I can confirm that if you create a Redis instance (that is, cluster mode disabled) in ElastiCache, it will work fine with django-redis. Just insert the primary endpoint into the Django config you posted.
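For example, a minimal sketch, assuming you expose the endpoint via an environment variable (the variable name REDIS_ENDPOINT is my own choice, not a Beanstalk convention):
import os

# e.g. REDIS_ENDPOINT="my-redis.abc123.0001.use1.cache.amazonaws.com:6379",
# set as a hypothetical Elastic Beanstalk environment property
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": f"redis://{os.environ['REDIS_ENDPOINT']}/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}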
N.B. If you plan to use read replicas, set it up like this:
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": [
            "redis://<MASTER ENDPOINT>",
            "redis://<SLAVE ENDPOINT>",
        ],
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}
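With a LOCATION list like this, django-redis's default client directs writes to the first URL (the master) and serves reads from the replica entries.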
If you spin up a Redis cluster (cluster mode enabled), however, you cannot use vanilla django-redis. You'll have to use redis-py-cluster with it, as described in this post. Replicated here:
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://XXX.YYY.ZZZ.cache.amazonaws.com/0',
        'OPTIONS': {
            'REDIS_CLIENT_CLASS': 'rediscluster.RedisCluster',
            'CONNECTION_POOL_CLASS': 'rediscluster.connection.ClusterConnectionPool',
            'CONNECTION_POOL_KWARGS': {
                'skip_full_coverage_check': True  # AWS ElastiCache has disabled CONFIG commands
            }
        }
    }
}
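Once this is in place, a quick sanity check is to round-trip a few keys through Django's cache API (a sketch, assuming redis-py-cluster is installed and the settings above are active):
from django.core.cache import cache

# Each key may hash to a different cluster slot/node; rediscluster
# follows the MOVED redirections transparently.
for i in range(5):
    cache.set(f"probe:{i}", i, timeout=30)
print([cache.get(f"probe:{i}") for i in range(5)])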

Related

ElastiCache (Redis) cluster mode enabled with Django

I configured ElastiCache Redis with cluster mode enabled.
I want to connect ElastiCache with local Django, so I configured a bastion host.
I connected ElastiCache (non-cluster mode) with local Django and tried cache.set() and cache.get(). It's OK.
I installed 'django-redis-cache' and my 'settings.py' looks like this:
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': 'localhost:6379',
    }
}
But I have a problem when I connect ElastiCache (cluster mode) with Django.
I tried tunneling with the ElastiCache configuration endpoint.
When I use the same 'settings.py', the error message is:
'SELECT is not allowed in cluster mode'
So I changed 'settings.py':
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': 'localhost:6379',
        'OPTIONS': {
            'DB': 0
        },
    }
}
And then the error message is:
'MOVED 4205 xx.xx.xx.xxx:6379'
What do I have to do?
There are no examples that connect ElastiCache (cluster mode) with Django.

How to log library logs to AWS CloudWatch using Serilog?

Currently, I am working on an ASP.NET 6 Web API, and we use Serilog as the logger to send all the necessary details to CloudWatch; that part works fine. Now I need to add library logs, such as AWS errors, to CloudWatch. There is currently an option for this in the config file, but it only saves the logs to a file, which results in a No space left on device : '/app/Logs/serilog-aws-errors.txt' error, and the details in the file don't appear in the CloudWatch logs.
This is the appsettings data I use,
"Serilog": {
"Using": [ "AWS.Logger.SeriLog", "Serilog.Sinks.Console", "Serilog.Sinks.File" ],
"MinimumLevel": "Debug",
"WriteTo": [
{ "Name": "AWSSeriLog" },
{ "Name": "Console" },
{
"Name": "File",
"Args": {
"path": "Logs/webapi-.txt",
"rollingInterval": "Day"
}
}
],
"Region": "eu-west-2",
"LogGroup": "/development/serilog",
"LibraryLogFileName": "Logs/serilog-aws-errors.txt"
}
I need to know whether there is a way to log the details in serilog-aws-errors.txt to AWS CloudWatch or an S3 bucket.
This depends a lot on where in AWS you are trying to deploy your service. In ECS or Fargate you can log directly to the console. This would be a snippet of the container definition:
"containerDefinitions": [
{
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/dev/ecs/my-api-logs-here",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "ecs"
}
},
With the configuration above you only need Serilog.Sinks.Console, and everything will log without a special AWS sink. To write to the console you can just use:
loggerConfiguration.WriteTo.Async(a =>
{
    a.Console(new JsonFormatter());
});
When deployed to Fargate or ECS, these console logs will appear in your CloudWatch logs. No additional sink is necessary. Lambda logs have a similar setup. See: https://docs.aws.amazon.com/lambda/latest/dg/csharp-logging.html for more details.
If you want to use Serilog.Sinks.AwsCloudWatch, it does have some nice features, but the setup is a little different. You probably won't want to log to the console or your file sink at all; instead, you'll log directly to CloudWatch. Set it up according to the instructions on GitHub: https://github.com/Cimpress-MCP/serilog-sinks-awscloudwatch. You can get this up and running in your local environment, then set up your app settings so that this only runs when deployed to AWS, while locally you still use the console or file sinks.
var options = new CloudWatchSinkOptions
{
    // the name of the CloudWatch Log group for logging
    LogGroupName = logGroupName,
    // the main formatter of the log event
    TextFormatter = formatter,
    // other defaults
    MinimumLogEventLevel = LogEventLevel.Information,
    BatchSizeLimit = 100,
    QueueSizeLimit = 10000,
    Period = TimeSpan.FromSeconds(10),
    CreateLogGroup = true,
    LogStreamNameProvider = new DefaultLogStreamProvider(),
    RetryAttempts = 5
};

// setup AWS CloudWatch client
var client = new AmazonCloudWatchLogsClient(myAwsRegion);

// Attach the sink to the logger configuration
Log.Logger = new LoggerConfiguration()
    .WriteTo.AmazonCloudWatch(options, client)
    .CreateLogger();

Cannot connect to Redis sentinels in Django

I'm trying to connect to sentinels, but every time we get the same error:
Exception: Could not connect to any sentinel
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [
                {
                    "sentinels": [("redis-cluster.local.svc.cluster.local", 26379)],
                    "master_name": "mymaster"
                }
            ]
        }
    },
}
I can't figure out where to put the password key and db key.
And do I need to put the sentinels' URLs in the config, or is the service enough?
Note: when connecting to Redis/sentinels without Channels, we have no issues at all.
From the channels_redis readme:
hosts
The server(s) to connect to, as either URIs, (host, port) tuples, or dicts conforming to create_connection. Defaults to ['localhost', 6379]. Pass multiple hosts to enable sharding, but note that changing the host list will lose some sharded data.
Sentinel connections require dicts conforming to create_sentinel with an additional master_name key specifying the Sentinel master set. Plain Redis and Sentinel connections can be mixed and matched if sharding.
(emphasis mine)
From what I read it doesn't seem possible to use URIs for sentinel connections, so if you want to set the db and password keys you need to add the relevant keys to the hosts list item:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [
                {
                    "sentinels": [
                        ("redis-cluster.local.svc.cluster.local", 26379)
                    ],
                    "master_name": "mymaster",
                    "db": 0,
                    "password": "your_password"
                }
            ]
        }
    }
}
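As a quick way to verify the layer can actually reach Redis through Sentinel, a sketch (run in an async-capable shell such as python manage.py shell; the channel name is arbitrary):
import asyncio
from channels.layers import get_channel_layer

async def check():
    # The layer is built from CHANNEL_LAYERS["default"] above
    layer = get_channel_layer()
    await layer.send("sentinel-check", {"type": "ping"})
    print(await layer.receive("sentinel-check"))  # {'type': 'ping'}

asyncio.run(check())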

Problem connecting Redis and Django in GCP

I have deployed a Redis instance using GCP Memorystore.
I also have a Django app deployed using App Engine. However, I am facing problems connecting the two. Both are deployed in the same region.
The package I'm using is django_redis. When I try to log in to the admin page I get a connection error.
The error is:
Exception Value: Error 110 connecting to <Redis instance IP>:6379. Connection timed out.
Exception Location: /env/lib/python3.7/site-packages/redis/connection.py in connect, line 557
In settings.py I use:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("<Redis instance IP>", 6379)],
        },
    },
}

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://<Redis instance IP>/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient"
        }
    }
}
Note: With locally installed Redis and set to localhost, everything works fine.
In order to connect to Memorystore, you have to set up a VPC network for your application and add that connector to app.yaml under the vpc_access_connector property. It's described in the docs: Connecting to a VPC network.
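For illustration, the app.yaml entry looks roughly like this (PROJECT_ID, REGION, and CONNECTOR_NAME are placeholders for your own Serverless VPC Access connector):
# app.yaml: route outbound traffic through the VPC connector
vpc_access_connector:
  name: projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME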

How to cache with Redis on a different server

I have an app server which holds the Django app, and another server for caching. I am thinking of using Redis for caching. How do I pass the IP of the Redis server to my Django app?
Use settings.CACHES. If you are using django-redis, you can do the following (replace 127.0.0.1 with the IP or hostname of your Redis server):
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient"
        },
        "KEY_PREFIX": "example"
    }
}
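Once that's in place, Django's cache API talks to the remote Redis transparently. For example:
from django.core.cache import cache

# Stored on the remote Redis server, under the "example" key prefix
cache.set("greeting", "hello", timeout=60)
print(cache.get("greeting"))  # "hello"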