ElastiCache - What is the difference between the configuration endpoint and the node endpoint? - amazon-web-services

ElastiCache gives you both a configuration endpoint and individual node endpoints.
What is really the difference between the two, and in what use case would you use one versus the other?
I assume the configuration endpoint could point to a group of node endpoints, but I don't quite get it. A use case example would really help me understand when you'd want to use the two differently.

As per my understanding, a node endpoint is associated with one particular node in the cluster, while the configuration endpoint is for cluster management: clients connect to the configuration endpoint to get details about all of the nodes present in that cluster.
The configuration endpoint DNS entry contains the CNAME entries for each of the cache node endpoints; thus, by connecting to the configuration endpoint, your application immediately knows about all of the nodes in the cluster and can connect to all of them. You do not need to hard-code the individual cache node endpoints in your application.
For more information on Auto Discovery, see Node Auto Discovery (Memcached).
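For illustration, here is a rough sketch of what an Auto Discovery-capable client does under the hood, assuming a Memcached engine version of 1.4.14 or later (where the node list is exposed through the "config get cluster" command); the endpoint name in the usage comment is a placeholder:

# Rough sketch of Memcached Auto Discovery: ask the configuration endpoint
# for the current node list via the "config get cluster" command.
import socket

def discover_nodes(config_endpoint, port=11211):
    with socket.create_connection((config_endpoint, port), timeout=5) as sock:
        sock.sendall(b"config get cluster\r\n")
        response = b""
        while not response.endswith(b"END\r\n"):
            response += sock.recv(4096)
    lines = response.decode().splitlines()
    # lines[1] holds the config version, lines[2] the space-separated node list;
    # each entry looks like hostname|ip-address|port
    nodes = []
    for entry in lines[2].split():
        hostname, ip, node_port = entry.split("|")
        nodes.append((hostname or ip, int(node_port)))
    return nodes

# nodes = discover_nodes("yourECname.cfg.use1.cache.amazonaws.com")  # placeholder name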

My understanding of the AWS docs on this topic is that the configuration endpoint is what you need if you have multiple nodes. You would plug the configuration endpoint URL into their cache client software, which you download from your ElastiCache AWS Management Console (it looks like it is only available for Java and PHP at the moment).
If you just have one node then the node endpoint is the one you use with memcache, which with PHP looks like this:
$memcache = memcache_connect('yourECname.tvgtaa.0001.use1.cache.amazonaws.com', 11211);
http://www.php.net/manual/en/memcache.connect.php
p.s. once you download the cache client, it contains a link to the installation directions, which seem pretty self-explanatory: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Appendix.PHPAutoDiscoverySetup.html

Related

Node is not able to connect to Hub, keeps sending registration events

Objective: UI test execution takes quite some time and we have a lot of UI test cases. Currently we have a grid set up on AWS EC2, but scaling and descaling resources manually is time-consuming, so we decided to explore AWS ECS Fargate, where we can scale based on CPU and memory utilization.
Motivation blog: https://aws.amazon.com/blogs/opensource/run-selenium-tests-at-scale-using-aws-fargate/
Problem Statement: The node is initiating registration requests but is not able to register itself with the hub.
Findings till now: I found a repo on GitHub which does what we are trying to achieve except for one thing: it uses version 3.141.59 and we want version 4.4.0-20220831.
What I could achieve: Using this repo, I changed the version of the hub and node to 4.4.0-20220831 and also changed the environment variables according to the requirements of that version. On execution of the CloudFormation template the hub was up and running, but no node was connected. When I checked the logs of the hub and the node, I found the hub service was configured and running, and the node service was sending registration requests N times.
This is my first question here so I am not able to show images in the question itself, sorry for the inconvenience.
HUB Screenshots
Hub environment
Hub service discovery
Hub logs
Node Screenshots
Node environment
Node service discovery
Node logs
Before changing anything, everything was working as expected on V3, but we need V4.
Thank you for giving your valuable time, looking forward to your response.
Thank you once again.
The problem was not with any of these resources; when I allowed ports 4442 and 4443 in my security group, it worked.
Thank you everyone for your time and support.
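For anyone scripting this fix, here is a minimal sketch using boto3 that opens TCP 4442-4443 (the Selenium 4 event bus publish/subscribe ports) to members of the same security group; the security group ID is a placeholder:

# Sketch: allow the Selenium 4 event bus ports (4442 publish, 4443 subscribe)
# between hub and node tasks that share one security group.
import boto3

ec2 = boto3.client("ec2")
SG_ID = "sg-0123456789abcdef0"  # placeholder: the grid's security group

ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 4442,
        "ToPort": 4443,
        # allow traffic from other tasks in the same security group
        "UserIdGroupPairs": [{"GroupId": SG_ID}],
    }],
)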

Dynamic Stage Routing / Multi-Cluster Setup with Fargate

I have a Fargate cluster with a service running two containers:
a container running nginx for terminating mTLS (it accepts a defined list of CAs) and forwarding calls to the app container together with the DN of the client certificate
a Spring app running on Tomcat which does fine-grained authorization checks (per route & HTTP method) based on the incoming DN via a filter
The endpoints from nginx are exposed to the internet via a NAT gateway.
Infrastructure is managed via Terraform, and rolling out a new version is done via a task definition replacement which then points to the new images in ECR. ECS takes care of starting the new containers and then switches the DNS to them within 5 to 10 minutes.
Problems with this setup:
I can't do canary or blue/green deployments
If the new app version has issues (the app is not able to start, we have huge error spikes, ...), the rollback will take a lot of time.
I can't test my service in an integrated way without applying a new version and therefore probably breaking everything.
What I'm aiming for is some concept with multiple clusters and routing based on a specific header, so that I can spin up a new cluster with my new app version and traffic will not be routed to this version until I either a) send a specific header or b) completely switch to the new version with, for example, a specific SSM parameter.
Basically the same thing you can do easily on CloudFront with Lambda@Edge for static frontend deployments (using multiple origin buckets and switching the origin with Lambda based on the incoming request).
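For context, that CloudFront pattern looks roughly like the sketch below: an origin-request Lambda@Edge handler (Python runtime) that swaps the S3 origin when a given header is present; the bucket domain, region, and header name are placeholders, not part of the original setup:

# Sketch: Lambda@Edge origin-request handler that routes requests carrying a
# specific header to a second origin bucket. Names below are placeholders.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})
    if "x-canary" in headers:  # CloudFront lower-cases header names
        bucket_domain = "my-frontend-canary.s3.amazonaws.com"
        request["origin"] = {
            "s3": {
                "domainName": bucket_domain,
                "region": "us-east-1",  # placeholder bucket region
                "authMethod": "none",
                "path": "",
                "customHeaders": {},
            }
        }
        request["headers"]["host"] = [{"key": "Host", "value": bucket_domain}]
    return request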
As I have the requirement for mTLS and those fine-grained authorisations, I'm neither able to use a standard ALB nor API Gateway.
Are there any other smart solutions for my requirements?
To finally solve this, we went on to replicate the task definitions (xxx-blue and xxx-green) and ELBs and created two different A records. The deployment process:
find out which task definition is inactive by checking the weights of both CNAMEs (one will have 0% weight)
replace the inactive task definition so that it points to the new images in ECR
wait for the apps to become healthy
switch the traffic via the CNAME records to the ELB of the replaced task definition
run integration tests and verify that there are no log anomalies
(manually triggered) set the desired task count of the other task definition to zero to scale the old version down. Otherwise, if there is unexpected behaviour, the records can be used to switch the traffic back to the ELB of the old task definition.
What we didn't achieve with this: having client-based routing to different tasks.
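For the weight-based switch described in the steps above, a rough sketch using boto3 (not necessarily what was actually used; the hosted zone ID, record name, and ELB DNS names are placeholders) could look like this:

# Sketch: flip Route 53 weighted CNAME records between the blue and green ELBs.
import boto3

route53 = boto3.client("route53")
ZONE_ID = "Z0000000000000000000"   # placeholder hosted zone
RECORD = "api.example.com."        # placeholder record name

def weighted_change(set_identifier, elb_dns_name, weight):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": RECORD,
            "Type": "CNAME",
            "SetIdentifier": set_identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": elb_dns_name}],
        },
    }

def switch_to(target):  # target is "blue" or "green"
    blue_weight, green_weight = (100, 0) if target == "blue" else (0, 100)
    route53.change_resource_record_sets(
        HostedZoneId=ZONE_ID,
        ChangeBatch={"Changes": [
            weighted_change("blue", "blue-elb.example.com", blue_weight),
            weighted_change("green", "green-elb.example.com", green_weight),
        ]},
    )

# switch_to("green")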

Clustering WSO2 EI ESB and WSO2 EI MB profiles, WKA vs Multicast? Are my assumptions correct?

I have to create and configure a two node WSO2 EI cluster. In particular I have to cluster an ESB profile and MB profile.
I have some architectural doubts about this:
CLUSTERING ESB PROFILE DOUBTS:
I based my assumptions on this documentation: https://docs.wso2.com/display/EI640/Clustering+the+ESB+Profile
I found this section:
Note that some production environments do not support multicast.
However, if your environment supports multicast, there are no issues
in using this as your membership scheme
What could be the reason for not supporting multicast (so I can ask about possible issues with it)? Looking at the table (inside the previous link), it seems to me that possible problems could be related to the following points:
All nodes should be in the same subnet
All nodes should be in the same multicast domain
Multicasting should not be blocked
Is obtaining this information from the system/network engineers enough to decide whether to proceed with the multicast option?
Using multicast instead of WKA, would I need to do the same configuration steps listed in the first deployment scenario (the WKA-based one) related to "mounting the registry" and "creating/connecting to databases" (as shown in the first documentation link)?
Does using multicast instead of WKA allow me to avoid stopping the service when I add a new node to the cluster?
CLUSTERING MB PROFILE:
From what I understand, the MB profile cluster can use only WKA as its membership scheme.
Does using WKA mean that I have to stop the service when I add a new node to the cluster?
So, in the end, can we consider the ESB cluster and the MB cluster two different clusters? Does the ESB cluster (if it is configured using multicast) avoid a service stop when a new node is added, while the MB cluster has to be stopped to add a new one?
Many virtual private cloud networks, including Google Cloud Platform, Microsoft Azure, and Amazon Web Services, as well as the public Internet, do not support multicast. If you configure WSO2 products with multicast as the membership scheme on such a platform, the cluster will not work as expected. That is the main reason for the warning in the official documentation.
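If you want to verify this empirically rather than relying only on the network team, a quick check is to join a multicast group on one node and send a datagram from another; the sketch below uses an arbitrary test group and port (nothing WSO2-specific), so adjust as needed:

# Sketch: run "python mcast_test.py recv" on one node, then
# "python mcast_test.py send" on the other; if the receiver prints the
# datagram, multicast works between the two hosts.
import socket, struct, sys

GROUP, PORT = "239.1.1.1", 50000  # arbitrary test group and port

def receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print("waiting for a multicast datagram...")
    data, addr = sock.recvfrom(1024)
    print("received %r from %s" % (data, addr))

def sender():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(b"multicast-test", (GROUP, PORT))

if __name__ == "__main__":
    receiver() if sys.argv[1:] == ["recv"] else sender()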
You can consider the platform capability and choose any of the following membership schemes when configuring Hazelcast clustering in WSO2 products:
WKA
Multicast
AWS
Kubernetes
Other than WKA, the rest of the membership scheme options do not require you to include all of the members' IPs in the configuration, so newly introduced nodes can join the cluster with ease.
Even with the WKA membership scheme, if you have at least one known member active you can join a new member to the cluster, then apply the configuration change and restart the other servers without any service interruption.
Please note that with all of the above membership schemes, the rest of the configurations related to each product are still needed to successfully complete the cluster.
Regarding your concern about clustering the MB profile:
You can use any of the above-mentioned membership schemes that matches your deployment environment.
Regarding adding new members with WKA: you can maintain service availability and apply the changes to the servers one by one. You only need at least one WKA member running to introduce a new member to the cluster.
The WSO2 MB profile introduces cluster coordination through an RDBMS. With this new feature, by default, cluster coordination is not handled by the Hazelcast engine; when cluster coordination through an RDBMS is disabled, the Hazelcast engine manages cluster coordination.
Please note that when the RDBMS coordination is used, no server restarts are required.
I hope this was helpful.

Can not synchronize WSO2 node services

I have 2 nodes of WSO2 in a cluster. The log says that both nodes are connected to the cluster, but each node has its own service list. What I want is that when I configure a service on one node, it gets synchronized to the other one.
Everything is configured as in this tutorial:
https://docs.wso2.com/display/CLUSTER44x/Setting+up+a+Cluster#SettingupaCluster-Configuringtheloadbalancer
I suppose you have used SVN-based deployment synchronization. Could you please try rsync as mentioned in [1]? This is the more recommended synchronization mechanism.

Using comparative logic in AWS DNS/Route 53 records

We have a site set up in AWS. When we bring up a stack for a new release, we make it available at a versioned URL, i.e.
V1 available at v1.mysite.com
V2 available at v2.mysite.com
etc
Is it possible to make a single DNS entry that will point to the latest deployed version of my site automatically? So, after I deploy V1, I would have two DNS entries:
v1.mysite.com, which goes to the IP of its stack
mysite.com, which redirects to v1.mysite.com
Then when I deploy V2, mysite.com now redirects to v2.mysite.com without me manually having to edit the DNS entry.
In general, can I automatically make DNS entries or make some kind of wildcarded DNS entry that will always point to the highest numbered version of my site currently available in AWS? It should look at the digits after the V for all currently available DNS entries/stacks and make mysite.com point to the numerically highest one.
We are using CloudFormation to create our stacks and our DNS (Route 53) entries, so putting any logic in those scripts would work as well.
This isn't part of DNS itself, so it's unlikely to be supported by anything on Route53. Your best bet is a script that runs when your new instance starts or is promoted to be the production instance. It's pretty simple using boto:
Create a new boto.route53.record.Record
Create a new boto.route53.record.ResourceRecordSets
Add a change record with the action UPSERT and your record
Commit the ResourceRecordSets (with a simple retry in case it fails)
get_change() until Route53 replies INSYNC
Depending on your application you may also want to wait for all the authoritative DNS servers (dns.resolver.query('your-domain', 'NS')) at Amazon to know about your change.
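The steps above describe the legacy boto library; a rough equivalent with boto3 (the hosted zone ID is a placeholder, and an alias A record is used because a CNAME is not allowed at the zone apex) could look like this:

# Sketch: UPSERT mysite.com to point at the newest versioned record, then wait
# until Route 53 reports the change as INSYNC.
import boto3

route53 = boto3.client("route53")
ZONE_ID = "Z0000000000000000000"  # placeholder hosted zone

def point_site_at(version_record):  # e.g. "v2.mysite.com"
    resp = route53.change_resource_record_sets(
        HostedZoneId=ZONE_ID,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "mysite.com.",
                "Type": "A",
                # alias to the existing vN record in the same hosted zone
                "AliasTarget": {
                    "HostedZoneId": ZONE_ID,
                    "DNSName": version_record,
                    "EvaluateTargetHealth": False,
                },
            },
        }]},
    )
    # polls GetChange until the status becomes INSYNC
    waiter = route53.get_waiter("resource_record_sets_changed")
    waiter.wait(Id=resp["ChangeInfo"]["Id"])

# point_site_at("v2.mysite.com")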
We ended up just making this a manual step before deploying a new stack. If the new stack needs to be resolved at mysite.com, the deployer has to manually remove the existing mapping; then the CloudFormation scripts will create the new DNS mapping.
Not ideal, but better than a ton of messy logic in CloudFormation scripts, I suppose.