I have a Spring Boot app which connects to a Redis cluster using a JedisConnectionFactory:
RedisClusterConfiguration redisClusterConfiguration = new RedisClusterConfiguration(redisProperties.getCluster().getNodes());
redisClusterConfiguration.setPassword(redisProperties.getPassword());
jedisConnectionFactory = new JedisConnectionFactory(redisClusterConfiguration);
and reads the list of nodes from application.yml:
spring:
  redis:
    host: 127.0.0.1
    port: 6379
    timeout: 300s
    cluster:
      nodes: 127.0.0.1:6380,127.0.0.1:6381,127.0.0.1:6382
Now we want to switch to ElastiCache, since we are hosting our Redis cluster on AWS anyway.
This could be done quite easily if the AmazonElastiCache library could be used.
Then we could just connect to the ElastiCache cluster with AWS credentials, pull the available nodes, put them in a list, and pass it to Jedis instead of hardcoding them in application.yml, like:
// get cache cluster nodes using the AWS API
private List<String> getClusterNodes() {
    AmazonElastiCache client = AmazonElastiCacheClientBuilder.standard().withRegion(Regions.DEFAULT_REGION).build();
    DescribeCacheClustersRequest describeCacheClustersRequest = new DescribeCacheClustersRequest();
    describeCacheClustersRequest.setShowCacheNodeInfo(true);
    List<CacheCluster> cacheClusterList = client.describeCacheClusters(describeCacheClustersRequest).getCacheClusters();
    List<String> nodeList = new ArrayList<>();
    try {
        for (CacheCluster cacheCluster : cacheClusterList) {
            for (CacheNode cacheNode : cacheCluster.getCacheNodes()) {
                String nodeAddr = cacheNode.getEndpoint().getAddress() + ":" + cacheNode.getEndpoint().getPort();
                nodeList.add(nodeAddr);
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return nodeList;
}
But the DevOps team said that they can't configure AWS access on all labs, and they have their reasons. Also, instead of connecting to AWS and pulling all available clusters, we need to connect to a specific one by URL.
So I tried to pass the ElastiCache cluster URL directly to Jedis, both as a standalone node and as a cluster, in the application.yml configuration.
In both cases the connection is established, but when the app tries to write to ElastiCache it gets a MOVED exception:
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.data.redis.ClusterRedirectException: Redirect: slot 1209 to 10.10.10.011:6379.; nested exception is redis.clients.jedis.exceptions.JedisMovedDataException: MOVED 1209 10.10.10.102:6379
Which, as I understand it, means that the app tried to write to one of the nodes in ElastiCache but wasn't able to connect.
So the question is: is there a way to connect to an ElastiCache Redis cluster from a Spring Boot app using only the ElastiCache cluster URL?
I know that it's doable if ElastiCache Memcached is used.
Also, the Jedis driver is not a hard requirement.
Thank you.
After some research we learned that if the AWS ElastiCache cluster endpoint is set as a node in RedisClusterConfiguration, the driver (Jedis or Lettuce) is able to connect and discover all the nodes in the ElastiCache cluster. Also, if one of the nodes goes down, the driver is able to communicate with the ElastiCache cluster through one of the other nodes.
We migrated to the Lettuce driver while working on this upgrade as well, since Lettuce is the default driver provided in the Spring Boot Redis starter and supports the latest Redis versions. Lettuce connections are also designed to be thread-safe, whereas Jedis connections are not.
Code example:
List<String> nodes = Collections.singletonList("****.***.****.****.cache.amazonaws.com:6379");
RedisClusterConfiguration clusterConfiguration = new RedisClusterConfiguration(nodes);
return new LettuceConnectionFactory(clusterConfiguration);
Inspired by the above answer, here is more complete, detailed code:
List<String> nodes = Collections.singletonList("<cluster-host-name>:<port>");
RedisClusterConfiguration clusterConfiguration = new RedisClusterConfiguration(nodes);
ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()
        .closeStaleConnections(true)
        .enableAllAdaptiveRefreshTriggers()
        .build();
ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder()
        .autoReconnect(true)
        .topologyRefreshOptions(topologyRefreshOptions)
        .validateClusterNodeMembership(false)
        .build();
// if you want to add tuning options
LettuceClientConfiguration lettuceClientConfiguration = LettuceClientConfiguration.builder()
        .readFrom(ReadFrom.REPLICA_PREFERRED)
        .clientOptions(clusterClientOptions)
        .build();
LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(clusterConfiguration, lettuceClientConfiguration);
lettuceConnectionFactory.afterPropertiesSet(); // this is REQUIRED
StringRedisTemplate redisTemplate = new StringRedisTemplate(lettuceConnectionFactory);
I'm trying to create an AWS CloudWatch alarm for the SystemCpuUtilization of each RabbitMQ broker node via Terraform. To create the CloudWatch alarm, I need to provide the dimensions (node name and broker) as mentioned in the AWS docs.
Hence, I'm looking to fetch the RabbitMQ broker node names from AWS (via the CLI, API, or Terraform).
Please note: I'm able to see the metrics of each broker node in the AWS CloudWatch console, but not from the API, SDK, or CLI.
I went through the links below but didn't get anything handy:
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/mq/index.html#cli-aws-mq
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/mq_broker
Please let me know in case I'm missing something.
Recently, AWS started publishing CPU/Mem/Disk metrics per broker.
You should see these metrics under AmazonMQ/Broker metrics. You can now use the SystemCpuUtilization metric without a node name dimension, take the Maximum statistic to get the most overloaded node, and create a CloudWatch alarm based on this metric.
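To illustrate, a broker-level alarm in the same Terraform style might be sketched as follows (the alarm name and threshold are placeholders, and an aws_mq_broker resource named example is assumed; note the Maximum statistic and the absence of a Node dimension):

```hcl
resource "aws_cloudwatch_metric_alarm" "broker_cpu" {
  alarm_name          = "amazonmq-broker-cpu-high" # placeholder name
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = "2"
  metric_name         = "SystemCpuUtilization"
  namespace           = "AWS/AmazonMQ"
  period              = "120"
  statistic           = "Maximum" # most overloaded node across the broker
  threshold           = "80"

  dimensions = {
    # Broker only: no Node dimension needed at the broker level
    Broker = aws_mq_broker.example.name
  }
}
```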
The AmazonMQ node names used for the CloudWatch dimensions do not appear to be exposed through the AWS API, but the node name is predictable if you know the IP address. I believe this can be used to construct valid node names for alarms.
data "aws_region" "current" {}

resource "aws_mq_broker" "example" {
  ...
}

resource "aws_cloudwatch_metric_alarm" "bat" {
  for_each = toset([
    for instance in aws_mq_broker.example.instances : instance.ip_address
  ])

  alarm_name          = "terraform-test-foobar5"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = "2"
  metric_name         = "SystemCpuUtilization"
  namespace           = "AWS/AmazonMQ"
  period              = "120"
  statistic           = "Average"
  threshold           = "80"

  dimensions = {
    Broker = aws_mq_broker.example.name
    Node   = "rabbitmq#ip-${replace(each.value, ".", "-")}.${data.aws_region.current.name}.compute.internal"
  }
}
I raised the above-mentioned problem with AWS support; below is the resolution.
First of all, the response from the AWS team: AmazonMQ RabbitMQ broker nodes are managed internally by AWS, and currently they are not exposed via the API or SDK.
As a result, there is NO way to fetch the RabbitMQ broker node name via the API or SDK. Hence it's not possible to directly create a CloudWatch alarm on a RabbitMQ broker node's SystemCpuUtilization, as the node name is a required dimension for creating the alert.
There are two alternative solutions:
1. Query the RabbitMQ management API to fetch the node names.
2. Use prometheus/cloudwatch-exporter to fetch the metric details from CloudWatch, where the node names are available.
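For the first option, the node names can be read from RabbitMQ's own management HTTP API. A minimal sketch, assuming the broker's web console endpoint is reachable (the host, region, and credentials below are all placeholders):

```shell
# List RabbitMQ node names via the standard management API (/api/nodes).
# <broker-id>, <region>, $RABBITMQ_USER and $RABBITMQ_PASS are placeholders.
curl -s -u "$RABBITMQ_USER:$RABBITMQ_PASS" \
  "https://<broker-id>.mq.<region>.amazonaws.com/api/nodes" | jq -r '.[].name'
```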
I used the second method; below is the values file to fetch the metrics we are interested in:
prometheus-cloudwatch-exporter:
  namespace: monitoring
  enabled: true
  override:
    metrics:
      alb: false
      rds: false
      # ... based on requirement
    alerts:
      ec2: false # based on requirement
  additionalMetrics: |-
    # the configuration below will fetch the metrics
    # containing the RabbitMQ broker node names
    - aws_namespace: AWS/AmazonMQ
      aws_metric_name: SystemCpuUtilization
      aws_dimensions: [Broker, Node]
      aws_statistics: [Average]
If everything is configured correctly, you should be able to see the aws_amazonmq_system_cpu_utilization_average metric in Prometheus. Now use Prometheus Alertmanager to create alerts on top of this metric.
I am trying to implement a Consul agent and proxy as sidecar containers inside my ECS Fargate service. So inside the task there will be 3 containers running:
core-business-service-container
consul-agent-container
core-business-consul-proxy-container
All containers are up and running in the ECS task. The node has registered in the Consul UI as well, but the service is not up in the Consul UI.
I dug in and found the log of 'consul-agent-container'; here is the error log:
2021/03/12 03:33:14 [ERR] http: Request PUT
/v1/agent/check/pass/greeting-fargate-proxy-ttl?note=, error: CheckID
"greeting-fargate-proxy-ttl" does not have associated TTL
from=127.0.0.1:43252
Here are the commands I used to connect to Consul.
consul-agent-container:
"exec consul agent -ui -data-dir /consul/data -client="127.0.0.1"
-bind="{{ GetPrivateIP }}" -retry-join "172.31.79.139""
core-business-consul-proxy-container:
"exec consul connect proxy -register -service greeting-fargate
-http-addr 127.0.0.1:8500 -listen 127.0.0.1:8080 -service-addr 127.0.0.1:3000"
HashiCorp recently announced support for running Consul service mesh on ECS, using Terraform to deploy the agent and sidecar components. You might want to consider this as an alternative to your existing workflow.
This solution is currently in tech preview. You can find more information in the blog post: https://www.hashicorp.com/blog/announcing-consul-service-mesh-for-amazon-ecs.
I did not manage to get my proxy to work using the same method you were using, but I remember reading somewhere that you should declare your Connect proxy inside the service registration config:
{
  "service": {
    "name": "web",
    "port": 8080,
    "connect": { "sidecar_service": {} }
  }
}
After you have done that, I think you can launch your proxy using:
consul connect proxy -sidecar-for <service-id>
I did not verify this, because the application I was using relied on Spring Cloud Consul to register the services and I did not find where to register a proxy, but maybe this helps you further.
I've set up an Aurora PostgreSQL-compatible database. I can connect to the database via the public address, but I'm not able to connect via a Lambda function which is placed in the same VPC.
This is just a test environment and the security settings are weak. In the network settings I tried "no VPC" and I tried my default VPC, where the database and the Lambda are placed, but this doesn't make a difference.
This is my Node.js code for a simple SELECT statement:
var params = {
    awsSecretStoreArn: '{mySecurityARN}',
    dbClusterOrInstanceArn: 'myDB-ARN',
    sqlStatements: 'SELECT * FROM userrole',
    database: 'test',
    schema: 'user'
};
const aurora = new AWS.RDSDataService();
let userrightData = await aurora.executeSql(params).promise();
When I start my test in the AWS GUI, I get the following errors:
"errorType": "UnknownEndpoint",
"errorMessage": "Inaccessible host: `rds-data.eu-central-1.amazonaws.com'. This service may not be available in the `eu-central-1' region.",
"trace": [
"UnknownEndpoint: Inaccessible host: `rds-data.eu-central-1.amazonaws.com'. This service may not be available in the `eu-central-1' region.",
I've already checked the Amazon tutorial, but I can't find a step I didn't try.
The error message "This service may not be available in the `eu-central-1' region." is absolutely right, because an Aurora Serverless database is not available in eu-central-1, and I had configured an Aurora Postgres instance, not an Aurora Serverless DB.
AWS.RDSDataService(), which I wanted to use to connect to the database, is only available for Aurora Serverless instances. It's described here: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/RDSDataService.html.
This is the reason why this error message appeared.
I tried to connect to ElastiCache to put data, but I have not found a method to put the data.
How can I put and get data on AWS ElastiCache Redis?
My code:
mySession := getAWSSession()
svc := elasticache.New(mySession)

input := &elasticache.CreateCacheClusterInput{
    AutoMinorVersionUpgrade:   aws.Bool(true),
    CacheClusterId:            aws.String("my-redis"),
    CacheNodeType:             aws.String("cache.r3.large"),
    CacheSubnetGroupName:      aws.String("default"),
    Engine:                    aws.String("redis"),
    EngineVersion:             aws.String("3.2.4"),
    NumCacheNodes:             aws.Int64(1),
    Port:                      aws.Int64(6379),
    PreferredAvailabilityZone: aws.String("us-east-1c"),
    SnapshotRetentionLimit:    aws.Int64(7),
}
result, err := svc.CreateCacheCluster(input)

var data = Logo{}
data.name = "test1"
data.logo = "test2"
// how to put and get data from elasticache?
The Go SDK that you are using provides APIs for managing your ElastiCache infrastructure, such as creating/deleting clusters or snapshots, adding tags, purchasing cache nodes, etc. It doesn't provide APIs to put or get items inside the cache.
The Redis cluster that ElastiCache gives you is similar to one you might have installed on your own, so you can connect to it with the usual Go libraries outside the AWS SDK, e.g. go-redis/redis or garyburd/redigo.
In short: use the AWS SDK to manage your ElastiCache infrastructure, and a Redis Go client to put or get items from the cache.
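To make that split concrete, here is a small, dependency-free sketch of what any of those Redis clients sends over the wire: the RESP encoding of SET and GET commands. In a real application you would use go-redis or redigo against your cluster endpoint rather than hand-encoding; the endpoint mentioned in the comments is only illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// encodeRESP encodes a Redis command as a RESP array of bulk strings,
// which is the wire format every Redis client library produces.
func encodeRESP(args ...string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "*%d\r\n", len(args))
	for _, a := range args {
		fmt.Fprintf(&b, "$%d\r\n%s\r\n", len(a), a)
	}
	return b.String()
}

func main() {
	// What a "put" and a "get" look like on the wire: a client such as
	// go-redis opens a TCP connection to <cluster-endpoint>:6379, sends
	// these bytes, and reads the reply back.
	fmt.Printf("%q\n", encodeRESP("SET", "name", "test1"))
	fmt.Printf("%q\n", encodeRESP("GET", "name"))
}
```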
I am trying to use ElastiCache with a Spring app that is supposed to be deployed as a worker application in a worker environment on AWS.
The app has a cron job which should run every 5 minutes and update some data on ElastiCache. The cron.yaml is defined as:
version: 1
cron:
  - name: "memcache-dataset-update-job"
    url: "/runcron"
    schedule: "0/5 * * * *"
"/runcron" calls the following method:
@RequestMapping(method = RequestMethod.GET)
@ResponseStatus(value = HttpStatus.OK)
public void updateDataSet() {
    try {
        dataSet = initializeNewDataSet();
        memcached = new MemcachedClient(new BinaryConnectionFactory(ClientMode.Dynamic),
                AddrUtil.getAddresses(memcacheConfigEndpoint));
        // store a value (async) for 30 minutes
        memcached.set(dataSetKey, 1800, dataSetObject);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
My questions:
1. Should the request mapping be for HTTP POST?
2. Do I need to define permissions in the IAM worker role to allow my app access to ElastiCache? If yes, how? I could not find any help in the AWS docs.
I found the answers to my own questions:
1. The request mapping should be for the HTTP POST method.
2. No permissions need to be defined in the IAM worker role for ElastiCache access; the app just needs to be within the same VPC as your cache cluster.