Is the ElasticSearch AWS plugin a must when deploying on AWS? - amazon-web-services

We are planning to deploy Elasticsearch on Amazon Web Services. I noticed that there is a plugin from Elasticsearch that lets it use the AWS API for its unicast discovery mechanism: ElasticSearch Cloud AWS.
My questions are:
Should I use that plugin, or is it something nice to have but not required?
What is the effect of not using it?

You don't have to use the plugin.
If you don't, then you'll have to put the addresses of the nodes in your configuration file by hand (since multicast is not available on EC2).
The EC2 plugin can also set the availability zone of each instance as a node attribute; this can be used to tell Elasticsearch not to put primary and replica shards in the same availability zone. Again, you could do this by hand, as sketched below.
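For context, here is a minimal elasticsearch.yml sketch of both options, using setting names from the cloud-aws plugin era (hosts, security group, and zone values are illustrative; newer Elasticsearch versions moved this into the discovery-ec2 plugin with slightly different keys):

```yaml
# Option 1 - no plugin: list the master-eligible nodes by hand
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.1.10", "10.0.2.10", "10.0.3.10"]

# ...and tag each node's zone yourself for allocation awareness
node.zone: us-east-1a
cluster.routing.allocation.awareness.attributes: zone

# Option 2 - with the cloud-aws plugin (replaces the settings above):
# discovery.type: ec2
# discovery.ec2.groups: ["elasticsearch-nodes"]   # filter members by security group
# cloud.node.auto_attributes: true                # tags each node with its AZ
# cluster.routing.allocation.awareness.attributes: aws_availability_zone
```

So the plugin is a convenience, not a requirement: it saves you from maintaining the host list and zone attributes by hand as instances come and go.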

Related

How does Amazon configure and manage the EC2 instances for RDS and similar services?

Many different AWS services run on EC2 instances under the hood, something you can infer from their pricing pages.
Basically it's a multi-instance architecture (and not the more familiar multi-tenant approach that I personally use for most web applications).
When an AWS customer creates a new resource, internally AWS has to spin up a new EC2 instance, configure it, monitor its status and apply security patches and updates.
Does anyone know how they connect to the VM to configure it?
Do they use SSH to connect, or another protocol?
Or do they use some kind of agent, installed on the VM at first boot, to apply updates and changes?
Note: this question isn't about the details of managing a database; I just want to know how AWS applies and updates the configuration of EC2 instances when it offers a "managed" service (any service).

Is it possible to use AWS Managed Prometheus and Grafana without using EKS and EC2?

I have several servers with other service providers and want to use AMP and Amazon Managed Grafana for monitoring and alerting, but I can't find how to access the AMP remote write endpoint outside the AWS environment. Is it impossible?
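For what it's worth: the AMP remote write endpoint is a public, regional HTTPS endpoint, so it is not limited to EKS/EC2 - any Prometheus (v2.26+) that can sign requests with AWS SigV4 can push to it from outside AWS. A minimal prometheus.yml sketch, with an illustrative region and workspace ID:

```yaml
remote_write:
  - url: https://aps-workspaces.eu-west-1.amazonaws.com/workspaces/ws-12345678-1234-1234-1234-123456789012/api/v1/remote_write
    sigv4:
      region: eu-west-1
      # Credentials for an IAM principal allowed to remote-write
      # (e.g. the AmazonPrometheusRemoteWriteAccess managed policy);
      # omit these keys to fall back to the default AWS credential chain.
      access_key: AKIAEXAMPLEKEY
      secret_key: wJalrXUtnFEMIEXAMPLE
```

Amazon Managed Grafana is itself a hosted web app, so nothing needs to run inside AWS on your side; only the Prometheus doing the remote write has to reach the AMP endpoint.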

How to migrate the AWS Elasticsearch service to the Azure Elasticsearch service?

Is there any way we can migrate the AWS Elasticsearch service to the Azure Elasticsearch service? If there is, how can that be done? Can someone explain the method involved?
You will need a new cluster on Azure; after that you can migrate the data.
You can create snapshots of your indices and restore them into your Elasticsearch on Azure. Both clusters, the one on AWS and the one on Azure, need to be able to read from the same snapshot repository; a sketch of this route follows below.
You can also use a Logstash node to consume from your cluster on AWS and send to the cluster on Azure; in that case, the Logstash node needs to be able to communicate with both clusters.
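A minimal sketch of the snapshot route, assuming an S3 bucket both clusters can reach and the repository-s3 plugin installed on the Azure cluster (bucket and repository names are illustrative; on the managed AWS service, registering a repository additionally involves an IAM role and signed requests):

```
# On the AWS cluster: register the repository and take a snapshot
PUT _snapshot/migration_repo
{ "type": "s3", "settings": { "bucket": "es-migration-snapshots" } }

PUT _snapshot/migration_repo/snapshot_1?wait_for_completion=true

# On the Azure cluster: register the same bucket read-only, then restore
PUT _snapshot/migration_repo
{ "type": "s3", "settings": { "bucket": "es-migration-snapshots", "readonly": true } }

POST _snapshot/migration_repo/snapshot_1/_restore
{ "indices": "*" }
```

Marking the repository read-only on the restoring side avoids two clusters writing to the same bucket, which is unsupported and can corrupt the repository.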

AWS & Azure Hybrid Cloud Setup - is this configuration at all possible (Azure Load Balancer -> AWS VM)?

We have all of our cloud assets in Azure, including a Service Fabric cluster containing many applications and services that communicate with Azure VMs through Azure Load Balancers. The VMs have both public and private IPs, and the load balancers' frontend IP configurations point to the VMs' private IPs.
What I need to do is move my VMs to AWS; Service Fabric has to stay put on Azure, though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VMs through the load balancers using the VMs' private IP addresses, so the only ways I can see to achieve this are:
1. Keep the load balancers in Azure and direct the traffic from them to the AWS VMs.
2. Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above is technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IPs of the AWS VMs, right? Is that not less secure? If I set it up to go through a VPN (if that's even possible), is that as secure as using internal private IPs, as in the current load balancer config?
For #2, again, I'm not sure this is technologically achievable - can Service Fabric services even "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but I'm very new to the idea of using two cloud providers in a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to get an initial sense of how this would work, and here is a sample (not approved by Microsoft) with a cross-region Service Fabric cluster configuration. (I know these are different regions within Azure, not different cloud providers, but the sample is still useful for seeing how some of the pieces are configured.)
Hope this helps.
Based on the details provided in the comments of your own question:
SF is cloud agnostic; you could deploy your entire cluster without any dependency on Azure at all.
The cluster you see in your Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end the only thing left in Azure would be this resource screen.
Extending Oleg's answer ("creating a multi-region / multi-datacenter cluster in Service Fabric is possible"), I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on premises.
The one detail that is often unclear is that any option not hosted in Azure requires an extra level of management, because you have to handle the resources (VMs, load balancers, auto scaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside for a long time on the SF roadmap because they are very complex to build; this is why the team avoids recommending them, but it is possible.
If you want to go the AWS route, I'd point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster.
It is the first of a four-part tutorial with guidance on how to set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you could still access them from AWS without any problems.
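To make the standalone, cloud-agnostic option concrete, here is a trimmed sketch of the ClusterConfig.json used by the standalone Service Fabric package (node names and private IPs are illustrative; the full file also defines node types, diagnostics, and security under properties):

```json
{
  "name": "StandaloneClusterOnAws",
  "clusterConfigurationVersion": "1.0.0",
  "apiVersion": "10-2017",
  "nodes": [
    { "nodeName": "vm0", "iPAddress": "10.0.0.4",
      "nodeTypeRef": "NodeType0", "faultDomain": "fd:/dc1/r0", "upgradeDomain": "UD0" },
    { "nodeName": "vm1", "iPAddress": "10.0.0.5",
      "nodeTypeRef": "NodeType0", "faultDomain": "fd:/dc1/r1", "upgradeDomain": "UD1" },
    { "nodeName": "vm2", "iPAddress": "10.0.0.6",
      "nodeTypeRef": "NodeType0", "faultDomain": "fd:/dc1/r2", "upgradeDomain": "UD2" }
  ]
}
```

You deploy it with the CreateServiceFabricCluster.ps1 script that ships with the standalone package; everything around it (instances, load balancers, OS patching) is then yours to manage, which is exactly the extra operational cost mentioned above.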

AWS Kubernetes inter-cluster communication for shipping Fluentd logs

We have multiple k8s clusters on AWS (using EKS), each in its own VPC, but the VPCs are properly peered so the clusters can communicate with one central cluster, which hosts an Elasticsearch service that will collect logs from all clusters. We do not use the AWS Elasticsearch service, but rather run our own inside Kubernetes.
We use an ingress controller on each cluster, and each has its own internal AWS load balancer.
I have fluentd pods on every node of each cluster (through a DaemonSet), but they need to be able to reach Elasticsearch on the main cluster. Within the same cluster I can ship logs to Elasticsearch fine, but not from the other clusters, because they need to be able to access a service inside the central cluster.
What is a good, or the best, way to achieve that?
This has all been breaking new ground for me, so I wanted to make sure I'm not missing something obvious somewhere.
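One common pattern, sketched below under the assumption that Elasticsearch currently sits behind a ClusterIP service in the central cluster: expose it through an internal AWS load balancer, which is reachable across peered VPCs, and point the other clusters' fluentd at that load balancer's DNS name (name, namespace, and labels here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-internal
  namespace: logging
  annotations:
    # "internal" keeps the LB on private subnets, reachable over VPC peering
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: elasticsearch          # assumed pod label
  ports:
    - name: http
      port: 9200
      targetPort: 9200
```

The fluentd DaemonSets in the other clusters then use the load balancer's hostname as their Elasticsearch host; a Route 53 private hosted zone associated with every peered VPC gives it a stable name, and the security groups on the central cluster must allow the peered VPC CIDRs on port 9200.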