AWS ElastiCache Redis as SignalR Backplane

Has anybody tried to use AWS ElastiCache Redis (cluster mode disabled) with SignalR? We are running into what look like serious configuration issues and limitations with AWS Redis.
1) We are trying to use Redis as a backplane for SignalR:
//GlobalHost.DependencyResolver.UseRedis("xxxxxx.0001.use1.cache.amazonaws.com:6379", 6379, "", "Performance");
Per the docs it should be as simple as this, but I get a socket failure on PING when I try to connect. (I have seen posts about this with Windows Azure, but could not find any help articles for AWS.)
2) Does cluster mode have to be enabled? With cluster mode disabled we are supposed to use the replica endpoints for reads, and SignalR does not know about this.
Thanks in advance.

We finally resolved it by removing the clusters and making a standalone AWS Redis instance.
The other issue we had was that it was assigned to the wrong security group, so we changed it to the same group as our EC2 instances.
Note the two endpoint formats: if you are using the dependency resolver for SignalR you should not include ":6379" in the endpoint, because the port is passed as a separate argument, e.g. GlobalHost.DependencyResolver.UseRedis("xxxxxx.0001.use1.cache.amazonaws.com", 6379, "", "Performance"). But if you use Redis for read and write operations through StackExchange.Redis, then you do need to include ":6379" in the connection string.

This note (https://learn.microsoft.com/en-us/aspnet/signalr/overview/performance/scaleout-with-redis) says "SignalR scaleout with Redis does not support Redis clusters."
Also, perhaps remove ":6379" from the server name and pass only 6379 as the port?

Related

How to get details of metrics at GCP Redis

I have created a managed Redis instance at GCP. From the GCP console I can view metrics like connected/blocked clients, memory usage/max memory, etc.
How can I get these metrics using the gcloud CLI? Furthermore, how can I get more details of each connection, such as the <client_ip:port> each client connects from?
Not sure if it will help you, but check the official documentation; these metrics are exposed through the Cloud Monitoring API rather than through gcloud itself.
To see each connection Redis is serving, I believe it is not possible from gcloud; you would have to use Unix tools like netstat inside a VM, or the CLIENT LIST command from redis-cli if it is available on your instance.
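If it helps, here is a rough sketch using the google-cloud-monitoring Python client to pull the Memorystore connected-clients metric through the Cloud Monitoring API, which is what the console charts are built on (the project ID is a placeholder, and the client API may differ between library versions):

import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-gcp-project"  # hypothetical project ID

# Query the last 10 minutes of the Memorystore "connected clients" gauge.
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 600}, "end_time": {"seconds": now}}
)

results = client.list_time_series(
    request={
        "name": project_name,
        "filter": 'metric.type = "redis.googleapis.com/clients/connected"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    for point in series.points:
        print(series.resource.labels["instance_id"], point.value.int64_value)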

How to access redis logs on AWS ElastiCache

We have been facing latency issues with our Redis lately.
While trying to debug it I came across this post, which mentioned going over the Redis logs to investigate how often the DB is saved in the background (i.e. using BGSAVE).
I did some research on how to access the Redis log file but couldn't find anything for AWS ElastiCache. I also tried running the MONITOR command from redis-cli, but it does not report things like background saves.
How can I access such logs?
Apparently, there is no way to access the Redis server-side logs ("yet").
Source: https://forums.aws.amazon.com/thread.jspa?threadID=219210
This feature is now available starting with Redis version 6: you can publish logs from your Amazon ElastiCache for Redis clusters to CloudWatch Logs or Kinesis Data Firehose by enabling slow logs in the ElastiCache console.
You can read more details here.
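If you'd rather enable it from code, a minimal boto3 sketch might look like the following (the replication group ID and log group name are placeholders; check the current ModifyReplicationGroup API reference before relying on it):

import boto3

elasticache = boto3.client("elasticache")

# Enable slow-log delivery to CloudWatch Logs for an existing cluster.
elasticache.modify_replication_group(
    ReplicationGroupId="my-redis",
    ApplyImmediately=True,
    LogDeliveryConfigurations=[
        {
            "LogType": "slow-log",
            "DestinationType": "cloudwatch-logs",
            "DestinationDetails": {
                "CloudWatchLogsDetails": {"LogGroup": "/elasticache/my-redis"}
            },
            "LogFormat": "json",
            "Enabled": True,
        }
    ],
)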

AWS Lambda connection timeout to Elasticache

I am trying to make Serverless work with ElastiCache. I wrote a custom CloudFormation file based on the serverless-examples/serverless-infrastructure repo. I managed to put ElastiCache and Lambda in one subnet (checked with the CLI). I retrieve the host and the port from the Outputs, but whenever I try to connect with node-redis the connection times out. Here are the relevant parts:
Resources
Serverless Config
I ran into this issue as well, but with Python. For me, there were a few problems that had to be ironed out:
* The Lambda needs VPC permissions.
* The ElastiCache security group needs an inbound rule from the Lambda security group that allows communication on the Redis port. I had thought they could just be in the same security group.
* And the real kicker: I had turned on encryption in transit. This meant that I needed to pass ssl=True to redis.Redis(...). The redis-py page mentions that ssl_cert_reqs needs to be set to None for use with ElastiCache, but that didn't seem to be true in my case; I did, however, need to pass ssl=True (see the sketch below).
It makes sense that ssl=True needed to be set, but since the connection was just timing out I went round and round trying to figure out what was wrong with the permissions/VPC/SG setup.
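For reference, a minimal redis-py connection sketch (the endpoint is a placeholder, and the exact TLS options may vary with your setup):

import redis

# Placeholder endpoint; in-transit encryption is assumed to be enabled.
client = redis.Redis(
    host="my-cluster.xxxxxx.use1.cache.amazonaws.com",
    port=6379,
    ssl=True,                  # required with TransitEncryptionEnabled
    ssl_cert_reqs=None,        # needed in some setups, not in others
    socket_connect_timeout=5,  # fail fast instead of hanging until the Lambda times out
)
client.ping()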
As Tolbahady pointed out, the only solution was to create a NAT within the VPC.
In my case I had TransitEncryptionEnabled: "true" with AuthToken: xxxxx for my Redis cluster.
I ensured that both my Lambda and Redis cluster belonged to the same private subnet, and that my security group allowed traffic on the desired ports.
The major issue I faced was that my Lambda was unable to fetch data from my Redis cluster: whenever it attempted to get the data it would throw a timeout error.
I used Node.js with the node-redis client.
Setting the option tls: true worked for me. This is a mandatory setting if you have encryption in transit enabled.
Here is my config:
import { createClient } from 'redis';
import config from "../config";

let options: any = {
    url: `redis://${config.REDIS_HOST}:${config.REDIS_PORT}`,
    password: config.REDIS_PASSWORD,
    socket: { tls: true } // mandatory when in-transit encryption is enabled
};
const redisClient = createClient(options);
Hope this answer is helpful to those who are using Node.js with the node-redis dependency in their Lambda.

Amazon ec2 set up best practices for Rails app with mysql or postgres

I have to set up EC2 for a medium Rails app running on Apache2, MySQL, Capistrano and a few background services. I would like to know the best practices developers usually follow when setting up a Rails app. In particular, I want a setup that is easy to scale and covers at least:
auto deployment
security
regular data backup and an easy and quick way to restore the data
server recovery
fault tolerance
I am also interested in how to monitor server status and performance; any other best practices would also be helpful.
P.S. Take into account that my app's database will grow fast.
I think a good look at the AWS docs, and in particular the Architecture Center, would be the best place to start. However, let me address as many of your questions as I can.
Database
The easiest way to get a scalable, fault-tolerant database on AWS is to use the Relational Database Service (RDS). You should read the docs and best practices to ensure you get the most out of it, e.g. by deploying across multiple AZs.
EC2 Servers
The most recommended way to structure your servers is to decouple them into web servers (which serve HTML to users) and app servers (application logic, usually returning JSON or XML). See this architecture example.
However, the key is to use an Auto Scaling group behind an Elastic Load Balancer.
Automation
If you want to use capistrano, just install it into your servers. You could create a pre-configured AMI with it installed along with whatever else you want. Alternatively, you could install it in a deployment script. However, the most recommended method for this kind of thing is to use the AWS OpsWorks service which is Chef in the cloud.
Server Recovery & Fault Tolerance
If you use EC2 Auto Scaling and a server becomes unavailable (e.g. the hardware fails or it stops replying to EC2 health checks), Auto Scaling will automatically terminate it and launch a replacement.
With the addition of the ELB and ELB health checks, instances that stop responding to web requests can be taken out of service by the ELB.
You should read the docs for more info on this.
Backup and Recovery
For backing up data on EBS volumes attached to EC2 instances, use EBS snapshots (a sketch follows below). However, the best architectures keep EC2 instances stateless: they store nothing except application code, so it would not matter if they died. In that situation all data, including user files, can be stored on S3, where you have a number of backup options such as Cross-Region Replication and/or archiving to Glacier.
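For illustration, taking a snapshot with boto3 is nearly a one-liner (the volume ID is a placeholder; in practice you would look it up or filter by tags):

import boto3

ec2 = boto3.client("ec2")

# Snapshot a single EBS volume.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly backup",
)
print(snapshot["SnapshotId"])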
Monitoring
AWS provides CloudWatch, which gives you hypervisor-visible metrics such as network in/out, CPU utilization and more. If you want more data, you can push custom metrics, e.g. memory usage (see the sketch below). In addition to CloudWatch, you could use a server-level monitoring tool.
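As an example of pushing a custom metric with boto3 (the namespace and metric name are made up for this sketch):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a single data point; a cron job or agent would do this periodically.
cloudwatch.put_metric_data(
    Namespace="MyRailsApp",
    MetricData=[
        {
            "MetricName": "MemoryUsedPercent",
            "Value": 63.5,
            "Unit": "Percent",
        }
    ],
)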
Deployment
I recommend AWS Code Deploy.
Security
Use security groups to open only the ports you want users to be able to connect on, and lock down important ports (e.g. 22) to a specific set of IPs, as in the sketch below. You can also use network ACLs to block undesired traffic. AWS provides more information and suggestions here.
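A rough boto3 sketch of locking SSH down to a known range (the group ID and CIDR are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Allow SSH only from a trusted office range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office only"}],
        }
    ],
)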
I also recommend you read this Whitepaper.

neo4j cluster on Amazon AWS

This might sound like a newbie question, but I have a Neo4j instance running on the Amazon cloud. The instance is set to auto-scale at 80% usage: once usage reaches 80%, Amazon will create another Neo4j instance with the same configuration, and will keep adding more as each one reaches 80%.
My questions are -
1) Does this setup on Amazon mean we have a Neo4j cluster in place?
2) Do I need to do anything else to have a Neo4j cluster? What I have read is that you need a tool like ZooKeeper to maintain the cluster.
3) Will both instances in this setup be masters, or will it be more like a master/slave setup?
Any help, feedback, suggestions would be helpful.
Thanks in advance,
Ravi
Yes, if you are using an Auto Scaling group for Neo4j you need to set up a cluster. As @stefan-armbruster mentioned, you need the Neo4j Enterprise edition for that; in that case it is a master/slave setup.
Neo4j has its own solution for cluster management instead of ZooKeeper.
But with AWS and EC2 there are a few open questions about how to properly deploy Neo4j with an Auto Scaling group.
From the configuration file perspective:
* You need to maintain a unique server id for each machine in the cluster.
* You need to know the IP addresses/hostnames of the other machines in the cluster (see the discovery sketch below).
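As a rough illustration of that discovery problem (not an official Neo4j recipe: the Auto Scaling group name is made up, and the ha.server_id/ha.initial_hosts settings assume the classic Neo4j HA configuration), a boot-time script could look like this:

import boto3

ASG_NAME = "neo4j-cluster"  # hypothetical Auto Scaling group name

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

# Find the instances currently registered in the group.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[ASG_NAME]
)["AutoScalingGroups"][0]
instance_ids = [i["InstanceId"] for i in group["Instances"]]

# Resolve their private IP addresses.
reservations = ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]
ips = sorted(
    inst["PrivateIpAddress"]
    for r in reservations
    for inst in r["Instances"]
)

# Derive a unique ha.server_id from the instance's position in the sorted
# list (naive; in reality you would read your own IP from instance metadata).
my_ip = ips[0]
print(f"ha.server_id={ips.index(my_ip) + 1}")
print("ha.initial_hosts=" + ",".join(f"{ip}:5001" for ip in ips))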
Neo4j Enterprise edition features clustering; see the docs on this. With some well-written scripts around that to configure the new instances properly, I don't see a reason why AWS auto-scaling should not work.