S3 limit to objects in a directory of a bucket

I want to save all of my images in a directory of a bucket. Is the number of objects in the same directory unlimited?
for example:
/imgs/10000000.jpg
/imgs/10000001.jpg
/imgs/10000002.jpg
....
/imgs/99999999.jpg

Yes, the number of objects is unlimited. As John mentioned elsewhere, the entire S3 "file path" is really just one string internally; the use of / as a path separator is just a convention.
One suggestion for making use of this effectively is to name each image with a ULID (https://github.com/ulid/spec). This gives you a couple of advantages:
you don't need to worry about uniqueness, even if you put images in from multiple servers
because ULIDs are lexicographic and time-based, you can query S3 directly to see which images were uploaded when (you can generate ULIDs for the start and end timestamps and call S3's LIST to get the images between them; see the sketch after this list).
it's easier to handle reports and metrics – you can easily find out which images are new, because they'll have a ULID after that period's timestamp.
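A minimal sketch of that LIST-between-timestamps idea in Python with boto3 (the bucket name is hypothetical). Rather than assume a particular ULID library, the 10-character time component of a ULID is encoded by hand from the spec (48-bit Unix milliseconds in Crockford Base32), which is enough to form lexicographic range bounds:

```python
import time
import boto3

# Crockford Base32 alphabet used by the ULID spec.
CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def ulid_time_prefix(epoch_seconds):
    """Encode a timestamp as the 10-character time component of a ULID.

    Any ULID generated at or after this instant sorts lexicographically
    at or after this prefix.
    """
    ms = int(epoch_seconds * 1000)
    chars = []
    for _ in range(10):
        chars.append(CROCKFORD[ms & 0x1F])
        ms >>= 5
    return "".join(reversed(chars))

def list_images_between(bucket, start_ts, end_ts, prefix="imgs/"):
    """List image keys whose ULID names fall between two timestamps."""
    s3 = boto3.client("s3")
    start_key = prefix + ulid_time_prefix(start_ts)
    end_key = prefix + ulid_time_prefix(end_ts)

    keys = []
    paginator = s3.get_paginator("list_objects_v2")
    # StartAfter jumps straight to the first key at or after the start bound.
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix, StartAfter=start_key):
        for obj in page.get("Contents", []):
            if obj["Key"] >= end_key:
                return keys
            keys.append(obj["Key"])
    return keys

# Example: images uploaded in the last 24 hours (bucket name is hypothetical).
recent = list_images_between("my-images-bucket", time.time() - 86400, time.time())
```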

There is no limit on the number of objects stored in Amazon S3.
Amazon S3 does not actually have 'directories'. Instead, the Key (filename) of an object includes the full path of the object. This actually allows you to create/upload objects to a non-existent directory. Doing so will make the directory 'appear' to exist, but that is simply to make things easier for us humans to understand.
Therefore, there is also no limit on the number of objects stored in a 'directory'.
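To illustrate, a minimal boto3 sketch (bucket and file names are hypothetical): uploading a key that contains slashes is all it takes for the 'directory' to appear in the console.

```python
import boto3

s3 = boto3.client("s3")

# No "imgs/" directory needs to exist beforehand: the slash is just part of
# the key string. After this call, the console will show an "imgs/" folder.
with open("10000000.jpg", "rb") as f:
    s3.put_object(
        Bucket="my-images-bucket",   # hypothetical bucket name
        Key="imgs/10000000.jpg",
        Body=f,
    )
```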

Related

AWS console how to list s3 bucket content order by last modified date

I am writing files to an S3 bucket. How can I see the newly added files? For example, in the console the files are not ordered by the Last modified field, and I can't find a way to sort on that field or any other field.
You cannot sort on that; it is just how the UI works.
The main reason is that, for buckets with more than 1,000 objects, the UI only "knows" about the 1,000 elements displayed on the current page. Sorting them would be misleading: it would suggest you are seeing the newest or oldest 1,000 objects in the bucket, when in fact it would only reorder the 1,000 objects currently displayed. That would confuse people, so it is better not to offer sorting than to sort incorrectly.
Showing the actual 1,000 newest or oldest objects requires listing everything in the bucket, which takes time (minutes or hours for larger buckets), generates many backend requests, and costs more, since LIST requests are billed. If you want the 1,000 newest or oldest objects, you need to write code that does a full listing of the bucket or prefix, sorts all the objects, and then displays part of the result.
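For reference, a sketch of that full-listing approach in Python with boto3 (the bucket name is hypothetical):

```python
import boto3

def newest_objects(bucket, prefix="", count=1000):
    """Fully list a bucket/prefix and return the newest objects.

    Each page of up to 1,000 keys is a separate (billed) LIST request,
    so this can take a while on large buckets.
    """
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    all_objects = []
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        all_objects.extend(page.get("Contents", []))

    # Sort the complete listing by modification time, newest first.
    all_objects.sort(key=lambda obj: obj["LastModified"], reverse=True)
    return all_objects[:count]

for obj in newest_objects("my-bucket", count=20):   # hypothetical bucket name
    print(obj["LastModified"], obj["Key"])
```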
If you can sufficiently decrease the number of displayed objects with the "Find objects by prefix" field, the sort options become available and meaningful.

Why are tables segmented when exporting to parquet from AWS RDS

We use Python's boto3 library to execute start_export_task to trigger an RDS snapshot export (to S3). This successfully generates a directory in S3 that has a predictable, expected structure. Traversing down through that directory to any particular table directory (as in export_identifier/database_name/schema_name.table_name/) I see several .parquet files.
I download several of these files and convert them to pandas dataframes so I can look at them. They are all structured the same and are clearly pieces of the same table, but they range in size from 100 KB to 8 MB in seemingly unpredictable segments. Do these files/'pieces' of the table account for all of its rows? Do they repeat/overlap at all? Why are they segmented so (seemingly) randomly? What parameters control this segmenting?
Ultimately I'm looking for documentation on this part of parquet folder/file structure. I've found plenty of information on how individual files are structured and partitioning. But I think this falls slightly outside of those topics.
You're not going to like this, but from AWS' perspective this is an implementation detail and according to the docs:
The file naming convention is subject to change. Therefore, when reading target tables we recommend that you read everything inside the base prefix for the table.
— docs
Most of the tools that work with Parquet don't really care about the number or file names of the parquet files. You just point something like Spark or Athena to the prefix of the table and it will read all the files and figure out how they fit together.
The API also has no parameters to influence this behavior. If you prefer a single file, for aesthetic or other reasons, you could use something like a Glue job to read each table prefix, coalesce the data for that table into a single file, and write it back to S3.
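As a sketch of the "read everything inside the base prefix" advice, here is one way to pull all the parts into a single pandas DataFrame with boto3 (bucket and prefix are hypothetical; pandas needs a Parquet engine such as pyarrow installed). As with other Parquet datasets, each part is assumed to hold a disjoint slice of rows, so concatenating them yields the full table:

```python
import io

import boto3
import pandas as pd

def read_exported_table(bucket, table_prefix):
    """Read every Parquet part under a table's base prefix into one DataFrame."""
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    frames = []
    for page in paginator.paginate(Bucket=bucket, Prefix=table_prefix):
        for obj in page.get("Contents", []):
            if not obj["Key"].endswith(".parquet"):
                continue   # skip manifests and other non-data files
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            frames.append(pd.read_parquet(io.BytesIO(body)))

    return pd.concat(frames, ignore_index=True)

df = read_exported_table(
    "my-export-bucket",
    "export_identifier/database_name/schema_name.table_name/",
)
```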

What are the differences between object storage (for example S3) and a columnar-based technology

I was thinking about the difference between those two approaches.
Imagine you must handle information about pattern calls, which later should be displayed to the user. A pattern call is a tuple consisting of a unique integer identifier ("id"), a user-defined name ("name"), a project-relative path to the so-called pattern file ("patternFile"), and a convenience flag, which states whether the pattern should be called or not. The number of tuples is not known beforehand, and they won't be modified after initialization.
I thought that in this case a column-based approach, with BigQuery for example, would be better in terms of I/O and performance, as well as schema evolution. But I can't actually understand why. I would appreciate any help.
Amazon S3 is like a large key-value store. The Key is the filename (with full path) and the Value is the contents of the file. It's just a blob of data.
A columnar data store organizes data in such a way that specific data can be "jumped to", and only desired values need to be read from disk.
If you want to search the data, then some form of query logic is required. This could be done by storing the data in a database (typically a proprietary format), or by using a columnar storage format such as Parquet or ORC plus a query engine that understands this format (e.g. Amazon Athena).
The difference between S3 and columnar data stores is like the difference between a disk drive and an Oracle database.
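To make that concrete, here is a small sketch with pyarrow (the local file path is hypothetical): the pattern-call tuples from the question are written once as Parquet, and a columnar reader then fetches only the columns it needs instead of the whole blob.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# The pattern-call tuples from the question, stored column by column.
table = pa.table({
    "id":          [1, 2, 3],
    "name":        ["alpha", "beta", "gamma"],
    "patternFile": ["patterns/a.pat", "patterns/b.pat", "patterns/c.pat"],
    "call":        [True, False, True],
})
pq.write_table(table, "pattern_calls.parquet")

# A columnar reader can read just the columns it needs; the others are never
# touched on disk. Plain S3 would hand back the entire object instead.
names_only = pq.read_table("pattern_calls.parquet", columns=["name", "call"])
print(names_only.to_pandas())
```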

Find the size of S3 folders using PHP SDK

I am using an S3 bucket but I can't find a way to retrieve the size of a specific folder inside my bucket.
The scenario is:
I have a folder for every user on my website (/user1 ... /user2), where each one will have a limited amount of space (1 GB per folder). I need to show on my website the space that they still have, like:
Canheadd = consumed space (what I'm looking for) + new file size
to determine whether the user still has space to upload a new file.
I'm aware that you can loop over the object list, but for me that's not ideal because of the number and size of the documents.
Any new or direct solution is welcome.
There is no 'quick' way to obtain the amount of storage used in a particular 'folder'.
The correct way would be to call ListObjects(Prefix='folder/',...), iterate through the objects returned and sum the size of each object. Please note that each call returns a maximum of 1000 objects, so the code might need to make repeated calls to ListObjects.
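The question asks about the PHP SDK, but the logic is the same in any SDK; here is a sketch with Python/boto3, where a paginator handles the repeated ListObjectsV2 calls (bucket and prefix names are hypothetical):

```python
import boto3

def folder_size_bytes(bucket, prefix):
    """Sum the sizes of all objects under a prefix ('folder').

    The paginator issues repeated ListObjectsV2 calls behind the scenes,
    since each call returns at most 1,000 objects.
    """
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    total = 0
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            total += obj["Size"]
    return total

used = folder_size_bytes("my-site-bucket", "user1/")   # hypothetical names
print(f"user1 has used {used} of {1024**3} bytes")
```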
If this method is too slow, you could maintain a database of all objects and their sizes and query the database when the app needs to determine the size. Use Amazon S3 Events to trigger an AWS Lambda function when objects are created/deleted to keep the database up-to-date. This is a rather complex method, so I would suggest the first method unless there is a specific reason why it is not feasible.
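If you do go the database route, a sketch of the Lambda side might look like this (DynamoDB is my choice here, the answer only says 'a database'; the table layout, with 'user' as partition key and 'key' as sort key, is hypothetical). The function mirrors object creations and deletions into a per-user object-size table, which the app can then query and sum:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("user-object-sizes")   # hypothetical table name

def lambda_handler(event, context):
    """Mirror S3 object creations/deletions into a per-user object-size table."""
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]   # e.g. "user1/report.pdf"
        user = key.split("/", 1)[0]           # assumes keys start with the user folder

        if record["eventName"].startswith("ObjectCreated"):
            size = record["s3"]["object"]["size"]
            table.put_item(Item={"user": user, "key": key, "size": size})
        elif record["eventName"].startswith("ObjectRemoved"):
            # Delete events carry no size, but only the key is needed here.
            table.delete_item(Key={"user": user, "key": key})
```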

DataPrep: access to source filename

Is there a way to create a column with the filename of the source that created each row?
Use-Case: I would like to track which file in a GCS bucket resulted in the creation of which row in the resulting dataset. I would like a scheduled transformation of the files contained in a specific GCS bucket.
I've looked at the "metadata article" on GCP but it is pretty useless for my use-case.
UPDATED: I have opened a feature request with Google.
While they haven't closed that issue yet, this was part of the update last week.
There's now a source metadata reference called $filepath—which, as you would expect, stores the local path to the file in Cloud Storage (starting at the top-level bucket). You can use this in formulas or add it to a new formula column and then do anything you want in additional recipe steps.
There are some caveats, such as it not returning a value for BigQuery sources and not persisting through pivot, join, or unnest... but it covers the vast majority of use cases handily, and in other cases you just need to materialize it before some of those destructive transforms.
NOTE: If your data source sample was created before this feature, you'll need to generate a new sample in order to see it in the interface (instead of just NULL values).
Full notes for these metadata fields are available here: https://cloud.google.com/dataprep/docs/html/Source-Metadata-References_136155148