Using Azure Storage in place of HDFS for Big Data storage [closed] - hdfs

I am using HDFS to store data files. I want to know whether there is a way to use Azure Storage in place of HDFS, and if so, how.
I am using Spark and Python.

Posting an answer to close out this question: as Joel Cochran commented, you can use Azure Data Lake Storage Gen2, which is fully HDFS-compatible. See https://learn.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction to get started.
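To make that concrete, here is a minimal PySpark sketch of reading and writing against ADLS Gen2 instead of HDFS. The storage account mystorageacct, container mycontainer, the account-key authentication, and the paths are all placeholders, and the hadoop-azure (ABFS) connector must be available on the classpath.

```python
from pyspark.sql import SparkSession

# Build a Spark session with the ADLS Gen2 account key configured.
# A service principal or managed identity can be used instead of a key.
spark = (
    SparkSession.builder
    .appName("adls-gen2-example")
    .config(
        "spark.hadoop.fs.azure.account.key.mystorageacct.dfs.core.windows.net",
        "<storage-account-key>",
    )
    .getOrCreate()
)

# Use abfss:// URIs where you previously used hdfs:// paths.
df = spark.read.parquet(
    "abfss://mycontainer@mystorageacct.dfs.core.windows.net/data/input"
)
df.write.mode("overwrite").parquet(
    "abfss://mycontainer@mystorageacct.dfs.core.windows.net/data/output"
)
```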

Related

How do I remove the 10 MB limit in AWS S3? [closed]

I haven't set any rules or this restriction myself, but when I upload a file via the API I hit a size limit for some reason. I'm attaching a screenshot of the error.
I would be very grateful for any help. Thanks.
I re-created the bucket but got the same result; I deleted all the rules, and I also tried setting the limit to 1 GB, but it didn't work.

Amazon RDS encryption questions [closed]

Since Amazon RDS supports encrypting your database: does anyone know whether I can still query my data after encryption?
Thank you!
Amazon RDS encrypts data at rest (on disk).
Once data is read from disk, it is automatically decrypted.
Your queries operate exactly as they would against a non-encrypted database; the encryption is transparent to SQL.
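To illustrate that transparency, encryption is requested once when the instance is created and nothing changes on the query side. A minimal boto3 sketch follows; the instance identifier, credentials, and connection details are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Encryption at rest is requested once, at instance creation time.
# All identifiers and credentials below are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="encrypted-example",
    DBInstanceClass="db.t3.micro",
    Engine="postgres",
    MasterUsername="admin_user",
    MasterUserPassword="change-me",
    AllocatedStorage=20,
    StorageEncrypted=True,          # data is encrypted on disk with KMS
    # KmsKeyId="alias/aws/rds",     # optional; defaults to the AWS-managed key
)

# Applications then connect and query exactly as they would against an
# unencrypted instance, e.g. with psycopg2:
# conn = psycopg2.connect(host=endpoint, user="admin_user", password="change-me")
# conn.cursor().execute("SELECT * FROM orders LIMIT 10")
```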

List GCP project resource types using an API [closed]

I want to retrieve all the resources for a specific project, or for the whole account, using a Java API call (e.g. buckets, storage resources, etc.).
There is no single API call that will give you a list of all resources in a project, as the different products use different API endpoints. You can, however, use Cloud Asset Inventory to export all asset metadata at a given point in time to a file in GCS.
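As a rough illustration, the export can also be kicked off programmatically with the Cloud Asset Inventory client library. This is a sketch assuming the Python google-cloud-asset package; the project ID and destination bucket are placeholders.

```python
from google.cloud import asset_v1

client = asset_v1.AssetServiceClient()

# Export resource metadata for every asset in the project to a GCS file.
# "my-project" and the destination bucket are placeholders.
output_config = asset_v1.OutputConfig()
output_config.gcs_destination.uri = "gs://my-asset-exports/assets.json"

operation = client.export_assets(
    request={
        "parent": "projects/my-project",
        "content_type": asset_v1.ContentType.RESOURCE,
        "output_config": output_config,
    }
)

# export_assets is a long-running operation; block until it completes.
print(operation.result())
```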

How to use the Google DLP API to delete sensitive content from data stored in Google BigQuery? [closed]

I have a table in Google BigQuery that has some sensitive fields. I have read about inspecting data, but I cannot find a way to redact the data using the DLP API directly in the BigQuery database.
Two questions:
Is it possible to do this using only the DLP API?
If not, what is the best way to fix the data in a table that runs to terabytes?
The DLP API does not yet support de-identifying BigQuery tables directly.
You can, however, write a Dataflow pipeline that leverages content.deidentify. If you batch your rows using Table objects (https://cloud.google.com/dlp/docs/reference/rest/v2/ContentItem#Table), this can work quite efficiently.
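For a sense of what one batch looks like, here is a sketch of a single content.deidentify call using the Python google-cloud-dlp client; the project ID, the EMAIL_ADDRESS info type, and the tiny inline table standing in for a batch of BigQuery rows are all illustrative.

```python
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()

# A tiny inline table standing in for a batch of BigQuery rows.
item = {
    "table": {
        "headers": [{"name": "email"}, {"name": "comment"}],
        "rows": [
            {"values": [{"string_value": "jane@example.com"},
                        {"string_value": "please contact jane@example.com"}]},
        ],
    }
}

response = client.deidentify_content(
    request={
        "parent": "projects/my-project",
        "inspect_config": {"info_types": [{"name": "EMAIL_ADDRESS"}]},
        "deidentify_config": {
            "info_type_transformations": {
                "transformations": [
                    {"primitive_transformation": {"replace_with_info_type_config": {}}}
                ]
            }
        },
        "item": item,
    }
)

# The de-identified table comes back in the same shape, so the pipeline
# can write it straight back to BigQuery.
print(response.item.table)
```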

Amazon S3: redirect on NoSuchKey error [closed]

I need to redirect to another object on S3 when a NoSuchKey error occurs (object not found).
How can I do this?
Thanks
You have provided very little information, so I don't know whether this advice is useful or not. However, if you configure your S3 bucket for website hosting, it is possible to define custom error pages that map to particular HTTP error codes, as explained here.
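To show what that looks like in practice, here is a boto3 sketch of a website configuration that redirects 404 (NoSuchKey) responses to a fallback object; the bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Bucket and key names below are placeholders.
s3.put_bucket_website(
    Bucket="my-website-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
        # Redirect 404 (NoSuchKey) responses to a fallback object.
        "RoutingRules": [
            {
                "Condition": {"HttpErrorCodeReturnedEquals": "404"},
                "Redirect": {"ReplaceKeyWith": "fallback.html"},
            }
        ],
    },
)
```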