Objects Not Visible Within S3 Bucket for GroundTruth Labeling Job - amazon-web-services

I am currently creating a GroundTruth Labeling job, and am following the tutorial
https://www.youtube.com/watch?v=_FPI6KjDlCI&t=210s
I have created the same bucket, ground-truth-example-labeling-job, and uploaded jpg files into it. In the tutorial, under "Select S3 bucket or resource", they were able to browse into the S3 bucket and see the jpg files inside.
However, when I go inside the ground-truth-example-labeling-job bucket, no jpg files are visible for me to select; the entire bucket appears empty.
Is this a permissions settings problem?

You cannot select the files themselves.
However, if you have a folder within the bucket, you can select that folder containing the input data.
In the video they selected the bucket, not the individual files.

Related

Unable to share the Google Bigquery bucket and folders to the vendor

I have created a BigQuery bucket and inside that bucket, I have created a folder. I want to share this folder with the vendor so that they can send the data in.
As of now, they are able to access the bucket but not the folder inside the bucket.
Could you please advise what permissions I need to grant so that they can load data into the bucket (and the folder)?
Thanks,
Radhika
First of all, it's not a BigQuery bucket but a Cloud Storage bucket. Also, folders don't exist in Cloud Storage: all files are stored at the bucket level, and the / character is just a human-readable convention that the UI renders as folders. They have no real existence; they are virtual. (You can convince yourself by creating a single file in a folder and then deleting it: the folder disappears too.)
All of that is to say that the folder doesn't exist and is not a resource on which you can grant permissions (or ACLs). The bucket is the lowest level of resource in Cloud Storage.
So the user needs the storage.objectViewer role to read, storage.objectCreator to write, or storage.objectAdmin to both read and write objects in Cloud Storage.
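A minimal sketch of granting one of those roles at the bucket level with the google-cloud-storage Python client. The bucket name and vendor account below are placeholders (not from the question), and the API calls are commented out because they require real credentials:

```python
# Sketch: grant the vendor write-only access at the *bucket* level,
# since folders are not grantable resources. Bucket and member names
# are placeholders, not from the question.

def binding_for(member, role="roles/storage.objectCreator"):
    """Build the IAM binding granting `role` to `member` on a bucket."""
    return {"role": role, "members": {member}}

# With real credentials (requires google-cloud-storage):
# from google.cloud import storage
# bucket = storage.Client().bucket("vendor-drop-bucket")
# policy = bucket.get_iam_policy(requested_policy_version=3)
# policy.bindings.append(
#     binding_for("serviceAccount:vendor@example-project.iam.gserviceaccount.com"))
# bucket.set_iam_policy(policy)
```

storage.objectCreator lets the vendor write new objects but not list or read existing ones, which is usually the right minimum for a drop-box arrangement.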

Different permissions same S3 bucket, parquet files

Problem
I have multiple files in the same S3 bucket. When I try to load one file into Snowflake, I get an "access denied" error. When I try a different file (in the same bucket), I can successfully load it into Snowflake.
Known difference: the file that does not work was generated by AWS. The file that can be loaded into Snowflake was also generated by AWS, but was saved to my local machine and then re-uploaded to the bucket.
The only difference is that I brought it down to my local machine.
Question: Is there a known file permission on parquet files? Why does this behavior go away when I download and re-upload to the same bucket?
It cannot be an S3 bucket issue. It has to be some encoding on the parquet file.
You are making some bad assumptions here. Each S3 object can have separate ACL (permission) values. You need to check what the ACL settings are by drilling down to view the details of each of those objects in S3. My guess is AWS is writing the objects to S3 with a private ACL, and when you re-uploaded one of them to the bucket you saved it with a public ACL.
Turns out I needed to add KMS permissions to the user accessing the file.
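The KMS dependency can be spotted from the object's metadata: a boto3 head_object call reports which server-side encryption each object uses. A hedged sketch (bucket and key names below are invented, and the AWS calls are commented out since they need credentials):

```python
# Sketch: an object encrypted with SSE-KMS ("aws:kms") requires
# kms:Decrypt on its key to be read; one encrypted with SSE-S3
# ("AES256") does not. Re-uploading from a local machine typically
# writes the copy back with the bucket's default (often SSE-S3).

def needs_kms_decrypt(head_response):
    """True when reading the object also requires kms:Decrypt."""
    return head_response.get("ServerSideEncryption") == "aws:kms"

# With credentials (boto3; bucket/key names are placeholders):
# import boto3
# s3 = boto3.client("s3")
# meta = s3.head_object(Bucket="my-bucket", Key="generated-by-aws.parquet")
# print(meta.get("ServerSideEncryption"), meta.get("SSEKMSKeyId"))
```

Comparing that output for the failing and the working file would show whether only one of them is SSE-KMS encrypted.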

Create directory of username in s3 from django

I want to create a directory on S3 using a name fetched from the Django SQL database. I was trying to do this, but it shows {} as the S3 bucket name. How should I resolve this?
There is no concept of folders or directories in S3; the Management Console just presents it that way. In Amazon S3 there are only buckets and objects.
Still, if we want "folders", we can create an object with a key like "abc/xyz/uvw/123.jpg", which many S3 tools (the S3 Management Console, S3Fox) display as a directory structure, but it is actually just a single object in the bucket. Refer to https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html
Reference & Credit : https://stackoverflow.com/a/2141499/5647272
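Concretely, "creating a directory" for a Django user just means putting the username into the object key as a prefix. A sketch (the bucket name and the trailing-slash placeholder convention are assumptions):

```python
# Sketch: build a key under the user's "folder". The folder is purely
# part of the object key; there is no separate directory to create.

def user_key(username, filename=""):
    """Return "<username>/<filename>". With no filename, uploading a
    zero-byte object at "<username>/" makes the "folder" appear in
    the S3 console."""
    return username.strip("/") + "/" + filename

# Upload with boto3 (placeholder bucket name; needs credentials):
# import boto3
# boto3.client("s3").put_object(Bucket="my-bucket",
#                               Key=user_key(user.username, "avatar.jpg"),
#                               Body=data)
```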

Lots of files appearing in my Amazon S3 bucket

Object and file names in my S3 bucket changed from the names I chose to those shown in the screenshot below. And now when I update a file, it uploads successfully but nothing changes: the date modified stays the same, and the changes in the code are not visible on the web page. Can someone please help me find out what happened to this bucket and how I can fix it?
The files you are showing are created by Amazon S3 bucket logging, which creates log files of access requests to Amazon S3.
Logging is activated within the Properties panel of your bucket, where you can nominate a target bucket and prefix for the logs.
So, your files are not being renamed. Rather, they are additional log files that are generated by Amazon S3.
If they are in the same location as your files, things will get confusing! Your files are still in there, just further down the alphabetical listing.
I would recommend:
Go into the bucket's properties
If you do not need the logs, then disable bucket logging
If you wish to keep the logs, configure them to write to a different bucket, or the same bucket but with a prefix (directory)
Delete or move the existing log files so that you will be left with just your non-log files
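If the logs landed with an empty target prefix, they can be told apart from your own files by their generated names: S3 server access logs are named like "TargetPrefixYYYY-mm-DD-HH-MM-SS-UniqueString", where the unique string is uppercase hex. A sketch for separating them, assuming an empty target prefix:

```python
import re

# Sketch: with an empty target prefix, S3 access-log objects are named
# "YYYY-mm-DD-HH-MM-SS-UniqueString" (uppercase-hex unique string).
LOG_NAME = re.compile(r"\d{4}-\d{2}-\d{2}-\d{2}-\d{2}-\d{2}-[0-9A-F]+")

def is_access_log(key):
    """True for keys that look like prefix-less S3 server access logs."""
    return LOG_NAME.fullmatch(key) is not None
```

Filtering a bucket listing with this predicate lets you delete or move only the log objects and leave your own files in place.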

Scrolling through a bucket

I have a large bucket on AWS. The program I'm using, EVS, requires that all videos and photos be in one bucket instead of spread across new buckets.
So my bucket now has a crapload of stuff in it. Is there a way I can skip straight to the item I want without scrolling through the entire bucket?
thanks
If you're using the AWS web console, you can just start typing and it will filter the files in the bucket.
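The console's filter box is essentially a key-prefix match, and the same thing can be done programmatically with boto3's list_objects_v2 and its Prefix parameter. A sketch (bucket and prefix names are invented; the AWS calls are commented out since they need credentials):

```python
# Sketch: the console filter is a prefix match over object keys.
def keys_with_prefix(keys, prefix):
    """Locally mimic what the Prefix= parameter does server-side."""
    return [k for k in keys if k.startswith(prefix)]

# Server-side, S3 returns only matching keys (placeholder names):
# import boto3
# pages = boto3.client("s3").get_paginator("list_objects_v2").paginate(
#     Bucket="my-evs-bucket", Prefix="videos/2023-")
# for page in pages:
#     for obj in page.get("Contents", []):
#         print(obj["Key"])
```

Using Prefix server-side avoids pulling the whole listing down, which matters once a bucket holds many thousands of objects.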