I am using sitespeed.io to measure the performance of some of my static URLs and uploading the results to an AWS S3 bucket.
Inside the S3 bucket the data is structured exactly as sitespeed.io outputs it (I have attached an image of this). When you go inside a results folder, you find an index.html that shows all the details of the configured pages.
Now I have a page with a calendar: when you select a date, it should show all the result folders for that date, and clicking a folder should open the sitespeed.io summary page (I've attached that as well). Unfortunately I can't share anything else from the code.
What I want to know is: is it possible to trigger a Lambda function through a REST API that fetches the results from S3 or DynamoDB and displays them on the front end for the user to view?
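Yes, that pattern works: API Gateway in front of a Lambda that lists the run folders for the chosen date and returns links to each run's index.html. A minimal sketch, assuming a hypothetical bucket name and a `results/<YYYY-MM-DD>/<run>/` key layout (adjust both to your actual upload path):

```python
import json

RESULTS_BUCKET = "my-sitespeed-results"  # hypothetical bucket name

def summary_links(common_prefixes):
    """Map S3 CommonPrefixes entries to their sitespeed.io summary pages."""
    return [
        f"https://{RESULTS_BUCKET}.s3.amazonaws.com/{p['Prefix']}index.html"
        for p in common_prefixes
    ]

def handler(event, context):
    """API Gateway proxy handler: ?date=YYYY-MM-DD -> summary links for that date."""
    import boto3  # available in the Lambda runtime; imported lazily so the
                  # module can be unit-tested without AWS credentials
    date = event["queryStringParameters"]["date"]
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(
        Bucket=RESULTS_BUCKET,
        Prefix=f"results/{date}/",  # assumed key layout
        Delimiter="/",              # one CommonPrefixes entry per run folder
    )
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps(summary_links(resp.get("CommonPrefixes", []))),
    }
```

The calendar page then calls the API Gateway endpoint with the selected date and renders the returned links.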
I have a React Amplify site with Storage and a Cognito user pool. When a user on my application uploads a picture, it triggers a Lambda function that modifies the picture and writes it to a public S3 bucket unrelated to the Amplify bucket I created. I want to update my Lambda function to write the file back into the user's own personal path in the Amplify Storage I set up, and get rid of the public S3 bucket altogether.
I am currently using boto3, assuming I know the bucket key path, like this:
bucket.upload_file(lambda_temp_location, user_bucket_item_key)
I had a custom function to upload their files to a public S3 bucket, but after switching to the Storage I set up with Amplify, I noticed that the key paths are unique to each user. Researching online, I checked the IAM role permissions and saw that the bucket key path looks like this:
private/${cognito-identity.amazonaws.com:sub}
How can I get the ${cognito-identity.amazonaws.com:sub} into my Lambda function? I thought I could append the user's sub to the path, but it doesn't match what I see in S3. I was thinking of sending this detail, if possible, from my JS script in the API call, or getting it within the Lambda itself by matching user attributes or something. Any help would be much appreciated.
Thank you!
It represents the IdentityId from the Cognito credentials provider (an identity pool id, not the user pool sub, which is why it doesn't match what you see in S3). You can check: https://aws-amplify.github.io/aws-sdk-ios/docs/reference/AWSCore/Classes/AWSCognitoCredentialsProvider.html
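If the Lambda sits behind API Gateway with AWS_IAM auth (i.e. the request is signed with the Cognito Identity Pool credentials Amplify obtains), the proxy event already carries the caller's identity id, so you don't need to send it from the JS side. A sketch under that assumption (bucket name and file names are hypothetical):

```python
def user_private_prefix(event):
    """Derive the Amplify Storage private/ prefix for the calling user.

    Assumes an API Gateway proxy event from a request signed with Cognito
    Identity Pool credentials, where the identity id is provided by the
    request context.
    """
    identity_id = event["requestContext"]["identity"]["cognitoIdentityId"]
    return f"private/{identity_id}/"

def handler(event, context):
    import boto3  # imported lazily so the module is unit-testable without AWS creds
    bucket = boto3.resource("s3").Bucket("my-amplify-storage-bucket")  # hypothetical
    key = user_private_prefix(event) + "result.jpg"
    bucket.upload_file("/tmp/result.jpg", key)  # Lambda temp location
    return {"statusCode": 200, "body": key}
```

The Lambda's execution role still needs s3:PutObject on the Amplify bucket; the IAM condition key shown above only scopes what the *user's* credentials can do.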
I am using boto3 in my Django app to upload images and videos to AWS S3, and I am also using the CloudFront CDN.
Users create their accounts and upload images and videos to AWS S3, but I want to put a check in place and implement admin approval for videos and images.
Currently, images and videos uploaded to S3 via the Django app are public by default.
Is it possible, via the AWS Management Console or the AWS CLI, to implement admin approval for images and videos?
Please help.
Use a specific prefix (like "unapproved/") when a user uploads files.
Create an admin panel application on web/mobile that lists the image files carrying the "unapproved/" prefix.
Review and approve there (one button): on approval, copy the original file under an "approved/" prefix (or simply without a prefix) and delete the old one.
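The approve step above can be sketched with boto3's copy-then-delete (S3 has no rename). The prefix names follow the answer; the bucket name is whatever your app uses:

```python
UNAPPROVED = "unapproved/"

def approved_key(key):
    """Map 'unapproved/<name>' to 'approved/<name>'."""
    if not key.startswith(UNAPPROVED):
        raise ValueError(f"not an unapproved key: {key}")
    return "approved/" + key[len(UNAPPROVED):]

def approve(bucket_name, unapproved_key):
    """Copy the object under the approved/ prefix and delete the original."""
    import boto3  # imported lazily so the helper above is testable without AWS creds
    s3 = boto3.client("s3")
    dest = approved_key(unapproved_key)
    s3.copy_object(
        Bucket=bucket_name,
        CopySource={"Bucket": bucket_name, "Key": unapproved_key},
        Key=dest,
    )
    s3.delete_object(Bucket=bucket_name, Key=unapproved_key)
    return dest
```

With this layout you can also keep the bucket private and let the Django app serve (or sign URLs for) only keys under approved/.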
I have a Lambda function (index.js) that is triggered by DynamoDB, gets an item from the table, and passes that data to an index.html in the same folder, updating its content. Is it possible to have that index.html hosted while also being accessible to this Lambda?
My goal is to trigger a Lambda each time a new entry is made in my DynamoDB table. I want the Lambda to simply update an element in an HTML file (e.g., a header) based on data from a read operation on the DynamoDB table.
I plan to host the HTML file on AWS, via S3 or AWS Amplify. I am not too fussed which option I use. I just want the lambda to automatically change an element in that hosted HTML file when triggered by the DynamoDB table.
I have researched the documentation and on here, but can't find a solution for this. Many suggestions mention a REST API etc., but those are for building web apps with far more detail and functionality than I need. Is this possible without using Amazon API Gateway etc.? Can I not just access the hosted HTML file from within the Lambda?
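This does work without API Gateway when the page is hosted on S3: a DynamoDB Streams trigger invokes the Lambda, which reads the page object, rewrites the element, and writes it back. A sketch of that flow (bucket, key, and the "headline" attribute name are hypothetical; the question's Lambda is Node.js, this sketch uses Python):

```python
import re

def set_header(html, new_text):
    """Replace the text of the first <h1> element."""
    return re.sub(
        r"(<h1[^>]*>).*?(</h1>)",
        lambda m: m.group(1) + new_text + m.group(2),
        html,
        count=1,
    )

def handler(event, context):
    """DynamoDB Streams trigger: rewrite the hosted page's header in S3."""
    import boto3  # imported lazily so set_header is testable without AWS creds
    s3 = boto3.client("s3")
    bucket, key = "my-site-bucket", "index.html"      # hypothetical hosting location
    record = event["Records"][-1]["dynamodb"]["NewImage"]  # newest item in the batch
    headline = record["headline"]["S"]                # assumed attribute name
    page = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=set_header(page, headline),
        ContentType="text/html",  # keep the browser rendering it as a page
    )
```

The trade-off versus the REST API approach: viewers see the change only after the object is rewritten (and any CloudFront cache in front of it expires), rather than on every page load.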
I managed to upload a file following the AWS Amplify quick start docs, and I used this example to set up my GraphQL schema, resolvers, and data sources correctly: https://github.com/aws-samples/aws-amplify-graphql.
I was stuck for a long time on an "Access Denied" error when my image was uploading to the S3 bucket. I finally went to my S3 console, selected the right bucket, went to the permissions tab, clicked on "Everyone", and selected "Write objects". With that done, everything works fine.
But I don't really understand why this works, and Amazon now shows a big, scary warning in my S3 console saying that making an S3 bucket public is not recommended.
I used an Amazon Cognito user pool with AppSync, and if I understood correctly, it is inside my resolvers that the image is uploaded to my S3 bucket.
So what is the right configuration to make image uploads work?
I already tried putting my users in a group with access to the S3 bucket, but it didn't work (I guess because the users don't directly interact with my S3 bucket; my resolvers do).
I would like my users to be able to upload an image and then display it in the app for everybody to see (very classic), so I'm just looking for the right way to do that, since the big warning in my S3 console suggests that making a bucket public is dangerous.
Thanks!
I'm guessing you're using an IAM role to upload files to S3. You can set a bucket policy that grants that role specific permissions, whether that is ReadOnly, WriteOnly, etc.
Take a look here: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
OK, I found where it was going wrong. I was uploading my image using the S3 bucket address given by aws-exports.js.
BUT, when you go to the IAM role policy and check the role for authorized users of your Cognito pool, you can see the different statements, and the one that allows putting objects in your S3 bucket uses the folders "public", "protected", and "private".
So you have to change those paths, or add one of these folders to the end of the bucket address you use in your front-end app.
Hope this helps someone!
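To make the key layout the IAM policy expects concrete, here is an illustrative helper (the access-level prefixes and the identity-id segment come from the policy above; the helper itself is a hypothetical sketch, not Amplify API):

```python
def storage_key(level, filename, identity_id=None):
    """Build an object key matching the Amplify Storage IAM policy prefixes.

    public/    -> readable by everyone with access to the bucket's public path
    protected/ -> namespaced by the owner's Cognito identity id, readable by others
    private/   -> namespaced by the owner's Cognito identity id, owner-only
    """
    if level == "public":
        return f"public/{filename}"
    if level in ("protected", "private"):
        if identity_id is None:
            raise ValueError("protected/private keys need the Cognito identity id")
        return f"{level}/{identity_id}/{filename}"
    raise ValueError(f"unknown access level: {level}")
```

For the "everybody can see the image" use case, uploading under public/ (and reading through the same prefix) avoids making the whole bucket public.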
I have two buckets with a large quantity of PDF files, and I want to make them searchable by file name and content after indexing all the documents. I tried CloudSearch, but it appeared to work only with a single data type. Please guide me on how I can make the documents in an Amazon S3 bucket searchable under a domain name, or from any web browser.
CloudSearch can index PDFs. You can submit that data from S3 buckets using the AWS CLI or the web console. This functionality is documented here http://docs.aws.amazon.com/cloudsearch/latest/developerguide/uploading-data.html
If you want something automated, AWS Lambdas can monitor your buckets for changes and submit new documents for indexing.
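The automated step can be sketched as an S3-triggered Lambda that submits a document batch to the CloudSearch domain's document endpoint (the endpoint URL and field names below are hypothetical). Note that upload_documents takes a JSON/XML batch, not raw PDFs, so to index the *content* you would add a text-extraction step and put the extracted text in a field the same way:

```python
import json

def sdf_batch(bucket, key):
    """Build a CloudSearch 'add' batch for a newly uploaded PDF.

    Here we index only the file name and S3 location; extracted PDF text
    would go into an additional field. Replacing '/' keeps the document id
    simple for this sketch.
    """
    return json.dumps([{
        "type": "add",
        "id": key.replace("/", "_"),
        "fields": {"filename": key, "location": f"s3://{bucket}/{key}"},
    }])

def handler(event, context):
    """S3 ObjectCreated trigger: index each new object in CloudSearch."""
    import boto3  # imported lazily so sdf_batch is testable without AWS creds
    cs = boto3.client(
        "cloudsearchdomain",
        # hypothetical document endpoint of your search domain
        endpoint_url="https://doc-my-domain.us-east-1.cloudsearch.amazonaws.com",
    )
    for rec in event["Records"]:
        bucket = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]
        cs.upload_documents(documents=sdf_batch(bucket, key),
                            contentType="application/json")
```

Your search page then queries the domain's search endpoint to return matching file names and locations.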