Amazon RDS or S3 Bucket to store game scores? [closed] - amazon-web-services

I am building a game using Unity and I want to track how well an individual does over a period of time, so I want to save their time, level, high score, etc. I am writing the scores to either a .txt or .json file at the end of the game. The game will be deployed to Android (and maybe iOS). I want the file to be sent off before the game returns to the home menu.
I wanted to know which is the better option for collecting the game data: Amazon RDS or an S3 bucket?

If it's a text file, use S3; it works great for that.
If you have JSON values, use DynamoDB.
AWS DynamoDB
If your JSON object is less than 4 KB, DynamoDB is significantly faster than S3 for individual operations. Refer to this link.
But yes, no RDS if you only have JSON. NoSQL is a great fit here. [DynamoDB]
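For context, a minimal sketch (in Python with boto3, which the original answer does not show) of writing one game result to DynamoDB. The table name, key schema, and attribute names are assumptions for illustration; a mobile game would normally go through an API layer (e.g. API Gateway + Lambda) rather than calling DynamoDB directly from the client.

```python
# Minimal sketch: store one game result as a DynamoDB item.
# "GameScores" and its key schema (PlayerId + Timestamp) are hypothetical.
import time
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("GameScores")  # hypothetical table

def save_score(player_id: str, level: int, score: int, duration_s: int) -> None:
    table.put_item(
        Item={
            "PlayerId": player_id,          # partition key
            "Timestamp": int(time.time()),  # sort key: when the run finished
            "Level": level,
            "Score": score,
            "DurationSeconds": duration_s,
        }
    )

save_score("player-123", level=4, score=9800, duration_s=312)
```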

Related

How to edit an S3 CSV file without downloading? [closed]

I would like to edit (add a column to) a CSV file stored in S3. I managed to do this by downloading the file, editing it with bash commands, and re-uploading it to S3.
But is there a better way to do this?
No. S3 is an object storage service, not a file system. To modify an object, you download it, modify it locally, and re-upload it.
That said, you can use third-party tools such as s3fs-fuse, which provide a "file-like" interface to S3, but the underlying S3 object replacement does not change.
If you do this often, you can modify S3 objects from an EC2 instance instead of downloading them to your local workstation outside of AWS.
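As a concrete illustration of the download-modify-re-upload pattern described above, here is a minimal Python/boto3 sketch that adds a column to a small CSV object. The bucket, key, and new column values are placeholders.

```python
# Minimal sketch: fetch a CSV from S3, append a column, and put it back.
import csv
import io
import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "data/report.csv"  # hypothetical bucket/key

# Read the object into memory (fine for small files; use download_file for large ones).
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

rows = list(csv.reader(io.StringIO(body)))
rows[0].append("new_column")          # header
for row in rows[1:]:
    row.append("default_value")       # value for every data row

out = io.StringIO()
csv.writer(out).writerows(rows)

# Re-upload: S3 objects are immutable, so this replaces the whole object.
s3.put_object(Bucket=bucket, Key=key, Body=out.getvalue().encode("utf-8"))
```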

What services would best be used to collect, transform, and store media player logs? [closed]

A video player sends the server log data about what the user has been doing (start, pause, play, playing, etc.).
Sending the logs to the server, storing them in the DB, and then running queued jobs to calculate stats on them has worked... okay, so far.
It's clear there should be some sort of optimization here. What services provide the best custom log storage?
What would be the best manual option? I'm considering running some Lambda functions and storing the data in AWS (RDS?) manually, but I'm wondering whether the maintenance of such a setup is warranted.
I would store the logs in AWS S3 (storage), then use AWS Glue (transform) and AWS Athena for ad-hoc querying of the different stats. This will still work out cheaper than a traditional database approach, and it has a lot of other advantages.
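To make the S3 + Glue + Athena suggestion concrete, here is a hedged sketch of kicking off an ad-hoc Athena query from Python with boto3. The Glue database, table, column names, and results bucket are all assumptions; it presumes the player-event logs already land in S3 and have been catalogued by Glue.

```python
# Minimal sketch: run an ad-hoc Athena query over player-event logs in S3.
# Database "media_logs", table "player_events", and the results bucket are hypothetical.
import boto3

athena = boto3.client("athena")

query = """
SELECT event_type, COUNT(*) AS events
FROM player_events
WHERE event_date = date '2023-01-01'
GROUP BY event_type
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "media_logs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print("Query execution id:", response["QueryExecutionId"])
```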

Amazon S3 for images/videos or server [closed]

I notice that a lot of people prefer to upload images and videos to storage services like AWS S3. These are images such as sliders, logos, product images, random images, etc.
What is the big difference between uploading those images to your own server and to a service like S3?
Price? Bandwidth? Access? Speed? Scalability?
Thanks
Please read this:
https://www.linkeit.com/blog/what-is-amazon-s3-and-its-benefits
If you use your own server for static files, you will need to maintain:
Scalability
Security
Backups (these need to be reliable and durable)
and many other things.
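As a small illustration of the S3 route, here is a minimal boto3 sketch that uploads one static image with basic metadata. The bucket name and key are placeholders; making the object publicly readable or fronting it with CloudFront is separate configuration.

```python
# Minimal sketch: push a static image to S3 so it can be served from the
# bucket (or via CloudFront) instead of from your own server.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="assets/logo.png",
    Bucket="my-static-assets",   # hypothetical bucket
    Key="images/logo.png",
    ExtraArgs={"ContentType": "image/png", "CacheControl": "max-age=86400"},
)
```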

Where to save images within AWS? [closed]

I was discussing with friends the best way to store files in AWS.
I believed that an S3 bucket was the best way to save static files, such as images from a website.
But my friends said that it is not the best option because of the high cost of having these images requested many times.
I need to know the best place to store images that will be rendered inside my site (which runs on an EC2 instance).
Could someone clarify this? Is storing images in S3 expensive for sites that receive many requests?
For storing static files like images, AWS S3 is one of the best options.
S3 is one of the cheapest cloud storage services. The main charge when serving images is outbound data transfer; GET and PUT requests are also charged, but at a very low per-request rate, so for most use cases that cost is negligible. You can clarify your use case if you want a more precise estimate, and you can also calculate the price here.
Find all the storage services AWS offers here: https://aws.amazon.com/products/storage/
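To make the "mostly outbound traffic plus a small per-request fee" point concrete, here is a back-of-the-envelope cost sketch. The rates below are placeholder numbers for illustration only, not current AWS prices; use the pricing calculator linked above for real figures.

```python
# Rough cost model for serving images from S3. All rates are PLACEHOLDERS,
# not actual AWS prices.
STORAGE_PER_GB_MONTH = 0.023   # placeholder $/GB-month stored
TRANSFER_PER_GB      = 0.09    # placeholder $/GB transferred out
GET_PER_1000         = 0.0004  # placeholder $ per 1,000 GET requests

def monthly_cost(stored_gb: float, outbound_gb: float, get_requests: int) -> float:
    return (
        stored_gb * STORAGE_PER_GB_MONTH
        + outbound_gb * TRANSFER_PER_GB
        + (get_requests / 1000) * GET_PER_1000
    )

# e.g. 50 GB of images, 200 GB served, 2 million image requests per month
print(round(monthly_cost(50, 200, 2_000_000), 2))
```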

How to use the Google DLP API to delete sensitive content from data stored in Google BigQuery? [closed]

I have a table in Google BigQuery that has some sensitive fields. I have read about inspecting data, but I cannot find a way to redact the data using the DLP API directly in the BigQuery database.
Two questions:
Is it possible to do this using just the DLP API?
If not, what is the best way to fix the data in a table that runs into terabytes?
The API does not yet support de-identifying BigQuery tables directly.
You can, however, write a Dataflow pipeline that leverages content.deidentify. If you batch your rows using Table objects (https://cloud.google.com/dlp/docs/reference/rest/v2/ContentItem#Table), this can work quite efficiently.
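For reference, a minimal sketch of the content.deidentify call such a pipeline would make, using the google-cloud-dlp Python client. The project ID, info types, and column/row data are placeholders; in practice this would run inside a Dataflow DoFn over batched BigQuery rows.

```python
# Minimal sketch: de-identify a batch of rows with DLP content.deidentify,
# passing the batch as a DLP Table. Project, info types, and data are placeholders.
from google.cloud import dlp_v2

def deidentify_rows(project_id, headers, rows):
    client = dlp_v2.DlpServiceClient()

    # One header per column, one row of string values per BigQuery row.
    table = {
        "headers": [{"name": h} for h in headers],
        "rows": [
            {"values": [{"string_value": str(v)} for v in row]}
            for row in rows
        ],
    }

    # Replace detected sensitive values with the info type name (e.g. EMAIL_ADDRESS).
    deidentify_config = {
        "info_type_transformations": {
            "transformations": [
                {"primitive_transformation": {"replace_with_info_type_config": {}}}
            ]
        }
    }
    inspect_config = {
        "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}]
    }

    response = client.deidentify_content(
        request={
            "parent": f"projects/{project_id}",
            "deidentify_config": deidentify_config,
            "inspect_config": inspect_config,
            "item": {"table": table},
        }
    )
    return response.item.table
```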