Fail back to AWS from Azure Site Recovery

We are implementing a solution where we are replicating EC2 instances (VMs) from AWS to Azure using Azure Site Recovery. Please note we are not migrating to Azure and only want to set up replication from AWS to Azure for disaster recovery purposes.
As per the article:
AWS instances are treated like physical, on-premises computers.
Once we have set up everything, such as enabling replication, we should be able to see the replicated VMs under Replicated Items in ASR. As per my understanding, when we run a failover, Azure VMs are created from the replicated data in Azure storage.
Now, when the primary AWS site is available again, what happens when we want to fail back to AWS?
As we are treating/considering AWS instances as physical servers, as per the article:
"Failback from Azure to an on-premises physical server isn't supported. You can only fail back to a VMware virtual machine."
Now the question is: will we be able to fail back to the primary AWS site the way we can fail back to VMware?

I answered your query at https://learn.microsoft.com/en-us/answers/questions/51568/fail-back-to-aws-from-azure-site-recovery.html#answer-51979
Quoting from the link:
Only failover is supported for AWS machines. As you have already mentioned in quotes above, failback is not supported for AWS machines using Azure Site Recovery.
As long as Azure receives data from the AWS machines, it will process it to create recovery points, and those points will be available for you to fail over to Azure, enabling business continuity. In your case specifically, you can fail over to Azure, but going back to AWS will not be an option through Azure Site Recovery.

Related

Multi-cloud solution for data platforms on hybrid and multi-cloud using Anthos

Google Cloud Platform has made hybrid- and multi-cloud computing a reality through Anthos, which is an open application modernization platform. How does Anthos work for distributed data platforms?
For example, I have my data in Teradata on-premises, AWS Redshift, and Snowflake on Azure. Can Anthos join all the datasets and allow users to query or perform reporting with low latency? What is the equivalent of GCP Anthos in AWS and Azure?
Your question is broad. Anthos is designed for managing and distributing containers across several K8s clusters.
For a simpler view, imagine this: you have the Anthos master, and its direct nodes are K8s masters. If you ask the Anthos master to deploy a pod on AWS, for example, the Anthos master forwards the request to the K8s master deployed on EKS, and your pod is deployed on AWS.
Now, rethink your question: what about the data? Nothing magic here: if your data are spread across several clusters, you have to federate them with a system designed for this. It's quite similar to having only one cluster with data on different nodes.
Anyway, you point here to the real next challenge of multi-cloud/hybrid deployment. Solutions will emerge to fill this empty space.
Finally, your last point: an Azure or AWS equivalent. There isn't one.
The newest Azure Arc seems lightweight: it only allows you to manage VMs outside the Azure platform with an agent installed on them. Nothing as manageable as Anthos. For example: you have 3 VMs on GCP and you manage them with Azure Arc. You have deployed NGINX on each and you want to set up a load balancer in front of your 3 VMs. I don't see how you can do this with Azure Arc. With Anthos, it's simply a Kubernetes service exposition: the load balancer will be deployed according to the cloud platform's implementation.
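To make that last point concrete, here is a minimal sketch using the Python kubernetes client to expose those NGINX pods through a Service of type LoadBalancer; the kubeconfig context name and the app=nginx pod label are assumptions for the example, not anything Anthos-specific.

```python
from kubernetes import client, config

# Assumed kubeconfig context for one of the registered clusters.
config.load_kube_config(context="gke-workload-cluster")

# A Service of type LoadBalancer: the underlying cloud platform
# provisions the actual load balancer in front of the matching pods.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="nginx-lb"),
    spec=client.V1ServiceSpec(
        selector={"app": "nginx"},  # assumed pod label
        ports=[client.V1ServicePort(port=80, target_port=80)],
        type="LoadBalancer",
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```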
As for AWS, Outposts is a hardware solution: you have to buy AWS-specific hardware and plug it into your on-premises infrastructure. More on-premises investment as part of your move-to-cloud strategy? Hard to sell. And it isn't compatible with other cloud providers. BUT re:Invent is coming next month. Maybe a surprise announcement?

What are strategies for bridging Google Cloud with AWS?

Let's say a company has an application with a database hosted on AWS and also has a read replica on AWS. Then that same company wants to build out a data analytics infrastructure in Google Cloud -- to take advantage of data analysis and ML services in Google Cloud.
Is it necessary to create an additional read replica within the Google Cloud context? If not, is there an alternative strategy that is frequently used in this context to bridge the two cloud services?
While services like Amazon Relational Database Service (RDS) provide read-replica capabilities, replication is only supported between managed database instances on AWS.
If you are replicating a database between providers, then you are probably running the database yourself on virtual machines rather than using a managed service. This means the databases appear just like any other resource on the internet, so you can connect them exactly the way you would connect two resources across the internet. However, you would be responsible for managing, monitoring, deploying, etc. This takes away much of the benefit of using cloud services.
Replicating between storage services like Amazon S3 would be easier since it is just raw data rather than a running database. Also, Big Data is normally stored in raw format rather than being loaded into a database.
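To illustrate the raw-data route, here is a minimal sketch that copies objects from an S3 bucket into a Google Cloud Storage bucket for analysis; the bucket names and prefix are placeholders, and the sketch reads each object into memory, so it is only suitable for modest object sizes.

```python
import boto3
from google.cloud import storage

# Placeholder names -- substitute your own buckets and prefix.
S3_BUCKET = "my-aws-data-lake"
GCS_BUCKET = "my-gcp-analytics-bucket"
PREFIX = "exports/"

s3 = boto3.client("s3")
gcs_bucket = storage.Client().bucket(GCS_BUCKET)

# List the objects under the prefix in S3 and re-upload each one to GCS.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=S3_BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        body = s3.get_object(Bucket=S3_BUCKET, Key=key)["Body"].read()
        gcs_bucket.blob(key).upload_from_string(body)
        print(f"copied s3://{S3_BUCKET}/{key} -> gs://{GCS_BUCKET}/{key}")
```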
If the existing infrastructure is on a cloud provider, then try to perform the remaining activities on the same cloud provider.

Multiple Hashicorp Vault servers in different AZs in AWS

I have 3 Availability Zones in my AWS VPC and I would like to run Vault to connect to S3. I would like to run 3 Vault servers (one for each zone) all of them syncing to the same S3 bucket. Is this HA scenario for Vault possible?
I read that Vault doesn't support HA using S3 as the backend and might need to use Consul (which runs 3 servers by default). A bit confused about this. All I want is to run multiple Vault servers all storing/reading secrets from the same S3 bucket.
Thanks for your inputs.
Abdul
Note that you could use DynamoDB to get an Amazon-managed service with HA support:
High Availability – the DynamoDB storage backend supports high availability. Because DynamoDB uses the time on the Vault node to implement the session lifetimes on its locks, significant clock skew across Vault nodes could cause contention issues on the lock.
https://www.vaultproject.io/docs/configuration/storage/dynamodb.html
There are several storage backends in Vault, and only some of them support HA, like Consul. However, if a backend doesn't support HA, it doesn't mean that it can't be used at all.
So, if you need to run multiple Vault instances, each one independent of the others, you should be able to use S3 as a storage backend. But if you need HA, you need to use Consul or any other backend that supports HA.
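Since the question is ultimately about several Vault servers storing and reading secrets, here is a minimal client-side sketch using the hvac library; the address, the VAULT_TOKEN environment variable, and a KV v2 engine mounted at secret/ are assumptions, and the same code works whether the URL points at a standalone instance or the active node of an HA cluster.

```python
import os
import hvac

# Assumed: a Vault address (a load balancer or one specific instance)
# and a token provided via the VAULT_TOKEN environment variable.
client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "https://vault.internal:8200"),
    token=os.environ["VAULT_TOKEN"],
)
assert client.is_authenticated()

# Write and read a secret through a KV v2 engine mounted at "secret/".
client.secrets.kv.v2.create_or_update_secret(
    path="myapp/db", secret={"username": "app", "password": "s3cr3t"}
)
read = client.secrets.kv.v2.read_secret_version(path="myapp/db")
print(read["data"]["data"]["username"])
```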
Hope this helps.

What query to run to determine Amazon Athena version?

I'd like to determine what version of Amazon Athena I'm connected to by running a query. Is this possible? If so, what is the query?
Searching Google, Stack Overflow, and the AWS docs has not turned up an answer.
Amazon Redshift launches as a cluster, with virtual machines being used for that specific cluster. The cluster must be specifically updated between versions because it is continuously running and is accessible by only one AWS account. Think of it as software running on your own virtual machines.
From Amazon Redshift Clusters:
Amazon Redshift provides a setting, Allow Version Upgrade, to specify whether to automatically upgrade the Amazon Redshift engine in your cluster if a new version of the engine becomes available.
Amazon Athena, however, is a fully-managed service. There is no cluster to be created -- you simply provide your query and it uses the metastore to know where to find data. Think of it just like Amazon S3 -- many servers provide access to multiple AWS customers simultaneously.
From Amazon Athena – Interactive SQL Queries for Data in Amazon S3:
Behind the scenes, Athena parallelizes your query, spreads it out across hundreds or thousands of cores, and delivers results in seconds.
As a fully-managed service, there is only ever one version of Amazon Athena, which is the version that is currently available.
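In practice that means there is no version number to query; you just submit SQL against the service. Here is a minimal sketch with boto3, assuming a database named mydb, a table named events, and an S3 output location that you own.

```python
import time
import boto3

# Placeholder database, table, and query-result location.
DATABASE = "mydb"
OUTPUT = "s3://my-athena-results-bucket/queries/"

athena = boto3.client("athena")

# Submit the query; Athena runs it on fully managed capacity.
qid = athena.start_query_execution(
    QueryString="SELECT COUNT(*) FROM events",
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": OUTPUT},
)["QueryExecutionId"]

# Poll until the query finishes, then fetch the results.
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(rows)
```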

AWS DynamoDB vs Elastic Beanstalk. What serves my purpose better?

The Parse migration guide recommends using Elastic Beanstalk if we move over to AWS. The more I read about AWS services, the more I think DynamoDB is the better choice. DynamoDB and Elastic Beanstalk both use NoSQL. Does anyone know the obvious difference between the two? The ability to handle many small but frequent requests is important for my project.
DynamoDB is the ultimate, scalable NoSQL database system. Go with Dynamo.
It handles many small requests very well.
Contrary to what the comments say, Elastic Beanstalk is NOT a web server, and it is NOT a database. Elastic Beanstalk is an AWS service that helps users quickly provision other AWS services, such as compute (think EC2) and storage (think S3 or DynamoDB), and set up monitoring and deployment of their application on these resources. With Beanstalk you can deploy your applications and retain control over the underlying AWS resources. In your case, you might use Elastic Beanstalk to deploy a MongoDB database server to store your Parse data.
DynamoDB, on the other hand, is a managed, distributed, highly available, and scalable non-relational (NoSQL) database provided as an AWS service. Dynamo is in some ways comparable to MongoDB (they can both store data and they are both non-relational), but where Mongo is a system that you have to manage and deploy yourself (perhaps with the help of Elastic Beanstalk), Dynamo is a fully managed system where you only have to worry about your application logic. In your case, you'd be replacing MongoDB with DynamoDB, which will free you to focus on your application instead of having to worry about maintaining MongoDB (i.e., updating it and the host OS when new releases come out, etc.).
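To make the "only worry about your application logic" point concrete, here is a minimal sketch using boto3; the table name and the user_id partition key are assumptions, and the table is presumed to already exist.

```python
import boto3

# Assumed: an existing DynamoDB table named "users" with partition key "user_id".
table = boto3.resource("dynamodb").Table("users")

# Small, frequent writes and reads -- no servers to provision or patch.
table.put_item(Item={"user_id": "u123", "name": "Alice", "score": 42})

response = table.get_item(Key={"user_id": "u123"})
print(response.get("Item"))
```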