What is the equivalent of Amazon Glacier in the Microsoft Azure environment? [closed] - amazon-web-services

Closed 3 years ago. This question is opinion-based and is not accepting answers.
I am doing some research and need to find the corresponding tool for archiving. What is the equivalent of Amazon Glacier in the Azure environment?

The Azure equivalent of AWS S3 is Azure Storage. S3 defines several storage classes, and Glacier is one of them.
Azure Blob Storage has the concept of access tiers. The Archive tier is the closest match to S3 Glacier; another option is the Cool tier. Which one to choose depends on how frequently the data is accessed.
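To make the tier choice concrete, here is a minimal sketch in Python: a toy heuristic for picking a tier (the thresholds are my own illustration, not official guidance), followed by moving an existing blob to that tier with the azure-storage-blob v12 SDK. The connection string, container, and blob names are placeholders.

```python
from datetime import timedelta

def choose_access_tier(expected_accesses_per_month: float,
                       min_retention: timedelta) -> str:
    """Pick a blob access tier from rough access-frequency heuristics.

    Illustrative thresholds only: Archive has hours-long rehydration
    latency and a 180-day minimum retention, Cool has a 30-day minimum,
    Hot has none.
    """
    if expected_accesses_per_month < 0.1 and min_retention >= timedelta(days=180):
        return "Archive"
    if expected_accesses_per_month < 1 and min_retention >= timedelta(days=30):
        return "Cool"
    return "Hot"

if __name__ == "__main__":
    # Hedged sketch: change an existing blob's tier with azure-storage-blob.
    from azure.storage.blob import BlobClient

    blob = BlobClient.from_connection_string(
        "<connection-string>", container_name="backups", blob_name="2020-01.tar.gz"
    )
    blob.set_standard_blob_tier(choose_access_tier(0.05, timedelta(days=365)))
```

Note the Archive tier's trade-off: it is the cheapest to store but, like Glacier, a blob must be rehydrated (which can take hours) before it can be read.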

The equivalent service is Azure Storage. You can explore further on Microsoft's comparison page:
AWS to Azure services comparison

Related

Terraform or CloudFormation for managing AWS serverless infrastructure [closed]

Closed 1 year ago. This question is opinion-based and is not accepting answers.
We are trying to create an infrastructure template that can be reused for Fargate deployments. Which tool would better fit this use case, Terraform or CloudFormation?
In my opinionated experience:
Terraform gives you a better language (HCL) and tooling (Terraform backends, workspaces, Terragrunt, ...), and also works with other clouds and services if you need to deploy outside Fargate.
CloudFormation gives you closer integration with AWS resources and services, as it is the foundation for a wide range of AWS products. However, composing and deploying YAML can get complex as the system grows, leading to extra tools and workarounds.
You can get the "best of both" using Terraform's CloudFormation module, defining resources in CloudFormation but driving them through the Terraform tooling. Check the gitops-blueprints repo for a reference implementation.
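Whichever tool you pick, the reusable-template idea itself is simple. As a hedged sketch (not the poster's setup), here is a Python helper that builds a minimal CloudFormation template for an ECS cluster and deploys it with boto3; the stack and cluster names are placeholders, and a real Fargate deployment would also need a task definition, service, and networking resources.

```python
import json

def fargate_cluster_template(cluster_name: str) -> str:
    """Build a minimal CloudFormation template (JSON) for an ECS cluster.

    Only illustrates the reusable-template idea; a full Fargate stack
    needs task definition, service, and networking resources too.
    """
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "Cluster": {
                "Type": "AWS::ECS::Cluster",
                "Properties": {"ClusterName": cluster_name},
            }
        },
    }
    return json.dumps(template)

if __name__ == "__main__":
    # Hedged sketch: deploying the template with boto3.
    import boto3

    cfn = boto3.client("cloudformation")
    cfn.create_stack(StackName="fargate-demo",
                     TemplateBody=fargate_cluster_template("demo"))
```

The same template body could equally be fed to Terraform's CloudFormation stack resource, which is what makes the "best of both" approach workable.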

Refactoring legacy infrastructure on AWS [closed]

Closed 2 years ago. This question does not meet Stack Overflow guidelines (it seeks recommendations for books, tools, or software libraries) and is not accepting answers.
Can somebody please recommend some sources on how best to approach refactoring a legacy AWS infrastructure? That is, how to reduce downtime, optimally migrate data stores (such as DynamoDB or S3), etc. Thanks in advance!
There are a number of approaches you can take to do this.
AWS has a lot of great resources on migration; as a starting point, take a look at the 6 Strategies for Migrating Applications to the Cloud. While you're already in the AWS Cloud, it is also a great time to evaluate whether there is anything you can replace or no longer need.
Several services assist with migration; for data stores, the two services below may cover most of your needs:
Database Migration Service
Data Pipeline
For services such as S3, you would need to migrate data to another S3 bucket, since bucket names are globally unique; if you want to keep the name, you must delete the origin bucket first. If the bucket is served publicly, consider putting a CloudFront distribution in front of it and switching the origin to the new bucket afterwards.
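The bucket-to-bucket move can be done entirely server-side. A minimal sketch with boto3 (bucket names are placeholders; for very large buckets you would use S3 Batch Operations instead):

```python
def copy_jobs(keys, src_bucket, dst_bucket):
    """Pair each key with the copy-source dict that boto3's copy() expects."""
    return [({"Bucket": src_bucket, "Key": k}, dst_bucket, k) for k in keys]

if __name__ == "__main__":
    # Hedged sketch: copy every object to the new bucket with boto3.
    import boto3

    s3 = boto3.resource("s3")
    keys = [obj.key for obj in s3.Bucket("old-bucket").objects.all()]
    for source, bucket, key in copy_jobs(keys, "old-bucket", "new-bucket"):
        s3.Bucket(bucket).copy(source, key)  # server-side copy, no local download
```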
For architecting your new infrastructure take a look at the AWS Well-Architected Framework.
There are a number of migration whitepapers that AWS has also produced, some are specific to particular technologies and some are more general.

Is it a good idea to deploy a GraphQL server on Google Cloud Functions or AWS Lambda? Otherwise what are the alternatives? [closed]

Closed 3 years ago. This question is opinion-based and is not accepting answers.
This morning I asked myself, "What are the different tech stacks to deploy a GraphQL server in the cloud (AWS, GCP, etc.)?"
After some research, I found some tutorials that use AWS Lambda. But is it a good idea to deploy a GraphQL server on Google Cloud Functions or AWS Lambda? Otherwise, what are the alternatives?
Some tutorials I found that use AWS Lambda:
https://serverless.com/blog/running-scalable-reliable-graphql-endpoint-with-serverless/
https://hackernoon.com/create-a-serverless-graphql-server-using-express-apollo-server-and-aws-lambda-c3850a2092b5
https://medium.com/@cody.taft/serverless-graphql-with-aws-f7b6da9d2162
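For orientation, the Lambda deployments in those tutorials all reduce to one handler function that receives the GraphQL request over an API Gateway proxy event. The sketch below is a toy illustration of that handler shape only; a real server would use a GraphQL library (the tutorials use Apollo Server), and the "schema" here is just a dict of resolvers I made up.

```python
import json

# Toy "schema": top-level field names mapped to resolver functions.
# Real deployments parse and execute full GraphQL documents via a library.
RESOLVERS = {
    "hello": lambda args: "Hello, " + args.get("name", "world") + "!",
}

def lambda_handler(event, context):
    """API Gateway proxy handler; body carries {"query": ..., "variables": ...}."""
    body = json.loads(event.get("body") or "{}")
    field = body.get("query", "").strip()  # toy parsing: treat query as a field name
    resolver = RESOLVERS.get(field)
    if resolver is None:
        return {"statusCode": 400,
                "body": json.dumps({"errors": [f"unknown field: {field}"]})}
    return {"statusCode": 200,
            "body": json.dumps({"data": {field: resolver(body.get("variables", {}))}})}
```

The serverless trade-off visible even in this sketch: each invocation is stateless, so cold starts and per-request connection setup (e.g. to a database) are the usual arguments against Lambda for latency-sensitive GraphQL APIs.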

AWS S3 Bucket as Windows Drive [closed]

Closed 2 years ago. This question needs details or clarity and is not accepting answers.
I want to map an S3 bucket as a Windows drive. I am aware that there are tools available in the market to do this, but I want to solve it using the AWS Python SDK (boto). I also know how to move files from a drive to S3, but I want S3 itself as a Windows drive. Please share some ideas on how to achieve this.
Amazon S3 is an object storage service. It is not a filesystem or virtual disk.
The commercial utilities (e.g. CloudBerry Drive) create virtual disks and translate disk access into API calls to Amazon S3. Files are downloaded/uploaded and buffered on the local disk; it is quite a complex process.
You would not be able to create a similar utility using only Python. You would need to create device drivers for Windows.
Generally, mounting an Amazon S3 bucket as a drive is not recommended. While it is usually fine for an initial load of data, it should not be used in production because performance is not reliable.
The correct way to use Amazon S3 is to make API calls directly to the service.
See: S3 — Boto 3 documentation
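As a minimal sketch of that direct-API approach with boto3 (bucket name and file paths are placeholders; the URI-parsing helper is my own addition for illustration):

```python
from urllib.parse import urlparse

def parse_s3_uri(uri: str):
    """Split s3://bucket/key/path into (bucket, key)."""
    parsed = urlparse(uri)
    if parsed.scheme != "s3" or not parsed.netloc:
        raise ValueError(f"not an S3 URI: {uri}")
    return parsed.netloc, parsed.path.lstrip("/")

if __name__ == "__main__":
    # Hedged sketch: direct API calls instead of a mounted drive.
    import boto3

    s3 = boto3.client("s3")
    bucket, key = parse_s3_uri("s3://my-bucket/reports/2020.csv")
    s3.upload_file("local-2020.csv", bucket, key)    # write
    s3.download_file(bucket, key, "copy-2020.csv")   # read
```

Working this way, the application treats objects as whole units to upload and download, which is what S3's API is designed for, rather than emulating random-access filesystem semantics.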

How can I perform data lineage in GCP? [closed]

Closed 2 years ago. This question needs to be more focused and is not accepting answers.
When we build a data lake on GCP with Cloud Storage and process data with services such as Dataproc and Dataflow, how can we generate a data lineage report in GCP?
Google Cloud Platform doesn't have a serverless data lineage offering.
Instead, you may want to install Apache Atlas on Google Cloud Dataproc and use it for data lineage.
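If you go the Atlas route, lineage can then be pulled from its v2 REST API. A hedged sketch (the host, port, and entity GUID are placeholders; the default Atlas port is 21000, and authentication is omitted here):

```python
from urllib.parse import urlencode

def atlas_lineage_url(base_url: str, guid: str,
                      direction: str = "BOTH", depth: int = 3) -> str:
    """Build the Apache Atlas v2 REST URL for an entity's lineage graph."""
    query = urlencode({"direction": direction, "depth": depth})
    return f"{base_url.rstrip('/')}/api/atlas/v2/lineage/{guid}?{query}"

if __name__ == "__main__":
    # Hedged sketch: fetch lineage from Atlas running on the Dataproc cluster.
    import json
    from urllib.request import urlopen

    url = atlas_lineage_url("http://atlas-host:21000", "<entity-guid>")
    lineage = json.load(urlopen(url))
    print(lineage.get("relations"))  # upstream/downstream edges
```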
Google Cloud Data Fusion supports lineage in the Enterprise edition. You can use Data Fusion to build and orchestrate pipelines, with Dataproc and Dataflow providing the capacity for running them. An introduction to Data Fusion lineage can be found in the documentation: https://cloud.google.com/data-fusion/docs/tutorials/lineage
If you do not otherwise use Data Fusion's capabilities, it is a bit of overkill for lineage alone. Lineage in Google Cloud Data Catalog would be optimal for many of my use cases; unfortunately, Data Catalog does not currently support lineage. I hope it is on the product roadmap and will support lineage in the future.