Is there any way to migrate from the AWS Elasticsearch service to Elasticsearch on Azure? If so, how can that be done? Can someone explain the method involved?
You will need to create a new cluster on Azure; after that you can migrate the data.
You can create snapshots of your indices and restore them into your Elasticsearch on Azure. Both clusters, the one on AWS and the one on Azure, need to be able to read from the same snapshot repository.
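As a rough sketch, the snapshot approach could look like this using the Elasticsearch snapshot REST API. The cluster hostnames, repository name, and S3 bucket name here are placeholders, and this assumes an S3 bucket that both clusters can reach (the Azure cluster would need the repository-s3 plugin installed and credentials for that bucket):

```shell
# On the AWS cluster: register an S3 snapshot repository
# ("logs-migration" is a hypothetical bucket name)
curl -X PUT "https://aws-cluster:9200/_snapshot/migration_repo" \
  -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": { "bucket": "logs-migration" }
}'

# Take a snapshot of your indices
curl -X PUT "https://aws-cluster:9200/_snapshot/migration_repo/snapshot_1?wait_for_completion=true"

# On the Azure cluster: register the same repository (read-only is safer),
# then restore the snapshot
curl -X PUT "https://azure-cluster:9200/_snapshot/migration_repo" \
  -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": { "bucket": "logs-migration", "readonly": true }
}'

curl -X POST "https://azure-cluster:9200/_snapshot/migration_repo/snapshot_1/_restore"
```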
You can also use a Logstash node to consume from your cluster on AWS and send to the cluster on Azure; in that case the Logstash node needs to be able to communicate with both clusters.
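A minimal Logstash pipeline for that second approach might look like the following. The hostnames and index pattern are assumptions; docinfo => true keeps the original index and document id so documents land in the same index on the target:

```conf
input {
  elasticsearch {
    hosts   => ["https://aws-cluster:9200"]
    index   => "my-index-*"
    docinfo => true
  }
}

output {
  elasticsearch {
    hosts       => ["https://azure-cluster:9200"]
    index       => "%{[@metadata][_index]}"
    document_id => "%{[@metadata][_id]}"
  }
}
```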
Related
I have an EC2 cluster in which my Elasticsearch is running. I want to use AWS AppSync; can I connect my cluster to it as a data source? If so, how?
Or is AppSync tightly coupled to Amazon OpenSearch Service?
Hi, I am not sure if anyone has come across this situation before. I have both Azure and AWS environments. I have a Spark cluster running on Azure Databricks, and a Python/PySpark script that I want to run on it. In this script I want to write some data into an AWS Redshift cluster, which I plan to do using the psycopg2 library.

Where can I find the IP address of the Azure Databricks Spark cluster so that I can whitelist it in the security group of the AWS Redshift cluster? At the moment I cannot write to the Redshift cluster, presumably because the request comes from the Azure Databricks cluster and the Redshift security group does not recognize it.
I have a similar use case connecting from Azure Databricks to AWS RDS. I need to whitelist the Azure Databricks IPs in the AWS security group attached to RDS. Databricks associates clusters with dynamic IPs, so the IP changes each time a cluster is restarted.
I am currently trying the following solution:
Create a public IP address in the Azure portal
Associate a public IP address to a virtual machine
https://learn.microsoft.com/en-us/azure/virtual-network/associate-public-ip-address-vm#azure-portal
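The portal steps above could be sketched with the Azure CLI as follows. The resource group, NIC, and IP names are placeholders, and this assumes you have permission to modify the NIC (with a Databricks-managed VNet you may hit the same permission error):

```shell
# Create a static public IP (hypothetical names)
az network public-ip create \
  --resource-group my-databricks-rg \
  --name databricks-egress-ip \
  --sku Standard \
  --allocation-method Static

# Attach it to the VM's NIC ip-configuration
az network nic ip-config update \
  --resource-group my-databricks-rg \
  --nic-name my-worker-nic \
  --name ipconfig1 \
  --public-ip-address databricks-egress-ip
```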
Currently I am getting an error that I do not have permission to update the VNet associated with Databricks.
This is the simplest solution I could come up with.
If this doesn't work, the next option is to try a site-to-site connection to set up a tunnel between Azure and AWS. This would allow all the dynamic IPs to be authorized for read and write operations on AWS.
Since AWS and GCP do not provide a managed service for any of the Redis modules, I am looking to run Redis with ReJSON in an HA configuration on AWS.
Is the best way to set it up on EC2 with RDB backups? How will EBS storage work, given that I also want multi-AZ and automatic failover?
Right now I am planning to deploy it on Kubernetes with this Helm chart: https://hub.helm.sh/charts/stable/redis-ha
Which is the better option for deployment, EC2 or Kubernetes? And how will data replication work across multiple AZs in either case?
RedisLabs provides a managed Redis with modules support on both AWS and GCP.
See Redis Enterprise Cloud (Cloud PRO): https://redislabs.com/redis-enterprise-cloud/
We have multiple Kubernetes clusters on AWS (using EKS), each in its own VPC, but they are VPC-peered properly to communicate with one central cluster that runs an Elasticsearch service collecting logs from all clusters. We do not use the AWS Elasticsearch service, but rather run our own inside Kubernetes.
We use an ingress controller on each cluster, and each has its own internal AWS load balancer.
I'm running fluentd pods on each node of every cluster (through a DaemonSet), but they need to be able to communicate with Elasticsearch on the main cluster. Within the same cluster I can ship logs to Elasticsearch fine, but not from the other clusters, since they need access to the service inside the central cluster.
What is the best way to achieve that?
This is all new ground for me, so I wanted to make sure I'm not missing something obvious.
We are planning to deploy Elasticsearch on Amazon Web Services. I noticed that there is a plugin, Elasticsearch Cloud AWS, that allows Elasticsearch to use the AWS API for its unicast discovery mechanism.
My questions are:
Should I use that plugin, or is it something nice to have but not required?
What is the effect of not using it?
You don't have to use the plugin.
If you don't, then you'll have to put the addresses of the nodes in your configuration file by hand (since multicast is not available on EC2).
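By hand, that would look something like this in elasticsearch.yml. The addresses are placeholders for your EC2 nodes' private IPs, and the exact setting names depend on your Elasticsearch version (these are the zen discovery settings from the cloud-aws plugin era):

```yaml
# Disable multicast (not available on EC2) and list the nodes explicitly
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.10", "10.0.0.11", "10.0.0.12"]
```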
The EC2 plugin can also set the availability zone of instances as node attributes; this can be used to tell Elasticsearch not to put primary and replica shards in the same availability zone. Again, you could do this by hand.
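A hand-configured version of that zone awareness might look like this in elasticsearch.yml. The zone value is a placeholder you would set per node, and setting names vary by version (older releases used node.zone; the EC2 plugin instead exposes the zone automatically as the aws_availability_zone attribute):

```yaml
# On each node, set its zone by hand (value differs per node)
node.attr.zone: us-east-1a

# Tell the cluster to spread primaries and replicas across zones
cluster.routing.allocation.awareness.attributes: zone
```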