Get Amazon Resource Name - amazon-web-services

Is there a way to list the Amazon Resource Name (ARN) of an S3 Bucket from the web GUI?
I know I can piece it together myself, but that just seems unnecessary. Ideally, I could go to the bucket's page in the S3 console and copy and paste the ARN. I've looked in the Properties page of the bucket, but I'm not seeing anything that looks useful there.

No, the current S3 console does not expose bucket ARNs. You could probably add that feature to the S3 console page yourself with a simple GreaseMonkey script, since the ARN is derivable from the bucket name.
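For what it's worth, piecing it together really is trivial, because S3 bucket ARNs contain no region or account ID. A minimal sketch (the bucket name is a placeholder):

import boto3

# S3 bucket ARNs have a fixed, region-less format, so the ARN
# can be derived from the bucket name alone.
def bucket_arn(bucket_name):
    return "arn:aws:s3:::" + bucket_name

print(bucket_arn("my-bucket"))  # arn:aws:s3:::my-bucket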

Related

Using S3 bucket as a file server for the public

Use-case
We basically want to collect files from external customers into a file server.
We were thinking of using the S3 bucket as the file server that customers can interact with directly.
Question
Is it possible to accomplish this where we create a bucket for each customer, and each customer is given a link to their S3 bucket that also serves as the UI for them to drag and drop files into directly?
They shouldn't have to log in to AWS or create an AWS account.
They should directly interact with only their own S3 bucket (drag-drop, add, delete files); there shouldn't be a way for them to look at other buckets. We will probably create many S3 buckets for our customers in the same AWS account. Their entry point into the S3 bucket UI is a link (the S3 bucket URL, perhaps).
If such a thing is possible, I would love some general pointers on what more I should do (see my approach below).
My work so far
I've been able to create an S3 bucket and grant public access to it.
I set policies to allow Get, List, and PutObject on the S3 bucket.
I've been able to give public access to objects inside the bucket using their links, but never to the bucket itself.
Is there something more I can build on, or am I hitting a dead end and this is not possible to accomplish?
P.S.: This may not be a coding question, but if at all possible your answer could include code to accomplish it, or otherwise general pointers.
S3 presigned URLs can help in such cases, but you have to write your own custom frontend application for the drag-and-drop features.
Link: https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html
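To sketch the backend half of that idea (a minimal boto3 example; the bucket name and key prefix are placeholders, and your frontend still has to submit the returned form fields along with the file):

import boto3

s3 = boto3.client("s3")

# Generate a presigned POST: the customer's browser can upload
# directly to this bucket/key without any AWS credentials.
post = s3.generate_presigned_post(
    Bucket="customer-uploads",        # placeholder bucket name
    Key="customer-123/${filename}",   # per-customer prefix; S3 fills in ${filename}
    ExpiresIn=3600,                   # the link expires after one hour
)

# Your drag-and-drop frontend submits post["fields"] plus the file
# as multipart/form-data to post["url"].
print(post["url"], post["fields"])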

How to access files from an Amazon S3 bucket without an account?

This might be a stupid question, but I've never used AWS before.
So apparently, to create an AWS account I need to give my credit card information, but I don't want to do that.
Is there any other way to access the information from this link?:
https://s3.console.aws.amazon.com/s3/buckets/quizdb-public/?region=us-east-1&tab=overview
The URL https://s3.console.aws.amazon.com/s3/buckets/quizdb-public/?region=us-east-1&tab=overview
is the link that is shown in the address bar when you log in to the AWS console, go to S3, and click on the bucket. If you do not have access to that specific AWS account and the AWS console, you will not be able to access the information in the bucket with that URL.
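That said, if the bucket itself allows public reads (independently of the console), you can fetch its contents anonymously, for example with unsigned requests. A sketch (whether quizdb-public actually permits this, I can't confirm):

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# An unsigned client needs no AWS account or credentials at all.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# This works only if the bucket policy/ACLs allow anonymous access.
response = s3.list_objects_v2(Bucket="quizdb-public")
for obj in response.get("Contents", []):
    print(obj["Key"])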

Creating Amazon Kinesis Data Generator Stack on different Region

I'm trying to generate a CloudFormation stack provided by AWS here. When I click the Create a Cognito User with CloudFormation button, it directs me to the AWS console CloudFormation page in us-west-2 (Oregon); from there it's pretty much self-explanatory. The problem is, the company that I'm working for only allows work in us-west-1 (N. California). I have looked over the CloudFormation template itself and I can't find any region being mentioned. I have also asked this question in the AWS developer forum but no one has responded, so I'm wondering if anyone here knows how to generate that particular stack in any region other than us-west-2 (Oregon)? Thanks!
I found a workaround for that. I used to face the same problem: my company policy was set to not use us-west-2, so I couldn't use the CloudFormation JSON script provided by Amazon Kinesis Data Generator.
What I did was:
1. Download the Amazon Kinesis Data Generator CloudFormation JSON script to your local machine. The download link can be found on the Amazon Kinesis Data Generator help page.
2. Download the source code. The source code download link can also be found on the Amazon Kinesis Data Generator help page.
3. In your AWS account, go to S3 and create an S3 bucket in a region you are allowed to use. Name it whatever you want.
4. Upload the source code downloaded in step 2 to the bucket created in step 3.
5. Edit the CloudFormation JSON script downloaded in step 1: change the bucket name referenced in the Lambda function to the name of the bucket you created in step 3.
6. Go to CloudFormation and create the stack by uploading your edited script.
One thing to keep in mind when implementing this workaround: if AWSLabs changes the source code, or a newer version of the source code comes to life, you will have to manually check for it and update the copy in your bucket.
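For reference, a rough sketch of steps 4 and 5 in Python (the bucket names, file names, and the exact bucket string inside the template are assumptions; check your downloaded script for the real values):

import boto3

s3 = boto3.client("s3")

# Step 4: upload the downloaded KDG source archive to your own bucket.
s3.upload_file(
    "kinesis-data-generator-source.zip",  # assumed local file name
    "my-kdg-bucket",                      # the bucket you created in step 3
    "kinesis-data-generator-source.zip",
)

# Step 5: point the template's Lambda code at your bucket instead of
# the AWS-owned one. "ORIGINAL-KDG-BUCKET" stands in for whatever
# bucket name appears in your copy of the script.
with open("kinesis-data-generator.json") as f:  # assumed script name
    template = f.read()
template = template.replace("ORIGINAL-KDG-BUCKET", "my-kdg-bucket")
with open("kinesis-data-generator-edited.json", "w") as f:
    f.write(template)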
I hope it was clear.
I have created a JMeter plugin to publish data records to a Kinesis Data Stream:
https://github.com/JoseLuisSR/awsmeter
It works very well, and you don't need to use any additional AWS services to publish events to Kinesis, as Kinesis Data Generator does; with KDG you could pay additional charges for services like Cognito, CloudFormation, and Lambda that are needed to build and deploy it.
You just need an AWS IAM user with programmatic access; then download JMeter and install the awsmeter plugin.
If you have questions or comments, let me know.
Thanks.

Understanding how AppSync + S3 work together

I tried, and succeeded, to upload a file following the AWS Amplify quick start docs, and I used this example to set up my GraphQL schema, resolvers, and data sources correctly: https://github.com/aws-samples/aws-amplify-graphql.
I was stuck for a long time on an "Access Denied" error when my image was being uploaded to the S3 bucket. I finally went to my S3 console, selected the right bucket, went to the Authorization tab, clicked on "Everyone", and finally selected "Write Object". With that done, everything works fine.
But I don't really understand why it's working, and Amazon now shows me a big, scary alert on my S3 console saying that making an S3 bucket public is not recommended at all.
I used an Amazon Cognito user pool with AppSync, and if I understood correctly, it's inside my resolvers that the image is uploaded to my S3 bucket.
So what is the right configuration to make the upload of an image work?
I already tried putting my users in a group with access to the S3 bucket, but it was not working (I guess because the users don't directly interact with my S3 bucket; my resolvers do).
I would like my users to be able to upload an image, and afterwards to display the image in the app for everybody to see (very classic), so I'm just looking for the right way to do that, since the big alert on my S3 console seems to tell me that making a bucket public is dangerous.
Thanks!
I'm guessing you're using an IAM role to upload files to S3. You can set the bucket policy to allow that role certain permissions, whether that is read-only, write-only, etc.
Take a look here: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
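As a minimal sketch of that approach with boto3 (the role ARN and bucket name are hypothetical; widen or narrow the actions to match the access you actually want to grant):

import json
import boto3

s3 = boto3.client("s3")

# Allow one specific IAM role to write objects into the bucket,
# without making the bucket public.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/appsync-upload-role"},  # hypothetical role
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-app-bucket/*",  # hypothetical bucket
    }],
}

s3.put_bucket_policy(Bucket="my-app-bucket", Policy=json.dumps(policy))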
OK, I found where it was going wrong. I was uploading my image to the S3 bucket address given by aws-exports.js.
BUT, when you go to IAM and check the policy of the role attached to your Cognito pool's authorized users, you can see the different statements, and the one that allows putting objects in your S3 bucket uses the folders "public", "protected", and "private".
So you have to change those paths, or append one of these folders to the bucket address you use in your front-end app.
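To illustrate the point (a sketch, assuming the Amplify-generated policy only allows writes under those prefixes; the bucket and file names are placeholders):

import boto3

s3 = boto3.client("s3")

# The object key must start with a prefix the policy allows,
# e.g. "public/"; a bare key like "photo.jpg" would be denied.
s3.upload_file("photo.jpg", "my-amplify-bucket", "public/photo.jpg")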
Hope it will help someone!

AWS S3 website versioning

I am running a website on an AWS S3 bucket. I have to update the website once in a while. At the moment, when I deploy, I just copy the built files to my bucket and overwrite the existing ones.
Is there a way to do some versioning on these deployments? I know there is built-in versioning in S3, but I think it only applies to individual objects.
The best option would be for every deployment to be tagged with a git commit-id, so I could roll back to a particular commit-id if needed.
Any ideas? I already tried naming directories with a commit-id prefix, but the problem is that index.html has to live in the root directory.
If you want a solution that lets non-technical users roll back to a previous version just by doing some clicking in the AWS console, you can try changing the Index document config.
For example, say you have a structure in the bucket like this:
bucket/v1/index.html
bucket/v2/index.html
...
bucket/vN/index.html
This means you would only need to change the config in Bucket properties -> Static website hosting -> Index document from v2/index.html to v1/index.html.
That sounds like "just doing some clicking in the AWS console".
You can use AWS CodePipeline and use git reverts to manage rollbacks. See this GitHub repository for a CloudFormation stack that sets up a website on S3/CloudFront with something like this in place.
You can configure bucket versioning using any of the following methods:
Configure versioning using the Amazon S3 console.
Configure versioning programmatically using the AWS SDKs.
Both the console and the SDKs call the REST API Amazon S3 provides to manage versioning.
Note
If you need to, you can also make the Amazon S3 REST API calls directly from your code. However, this can be cumbersome because it requires you to write code to authenticate your requests.
Each bucket you create has a versioning subresource (see Bucket Configuration Options) associated with it. By default, your bucket is unversioned, and accordingly the versioning subresource stores an empty versioning configuration.
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
</VersioningConfiguration>
To enable versioning, you send a request to Amazon S3 with a versioning configuration that includes a status.
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Status>Enabled</Status>
</VersioningConfiguration>
To suspend versioning, you set the status value to Suspended.
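For example, with boto3 the same status toggle looks like this (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")

# Sends the Enabled status shown in the XML above.
s3.put_bucket_versioning(
    Bucket="my-bucket",
    VersioningConfiguration={"Status": "Enabled"},  # or "Suspended"
)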
More information here.