I'm in the process of developing a web application (using Angular 6) that uses AWS Amplify.
The Storage module provided by Amplify lets you store your files at three protection levels (public, protected & private). I have a requirement to process an uploaded file via a Lambda function.
My question is whether the S3 buckets (and 'folders') created via Amplify are available to Lambda functions, given that the buckets appear to be locked down for use only via the app?
Would changing CORS on the S3 bucket do the trick? Any help appreciated.
An S3 bucket that is created by Amplify CLI is like any other S3 bucket. You can access it provided that appropriate permissions are in place. https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html
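To make that concrete, here is a hedged sketch assuming a Python Lambda. The bucket name and key are placeholders (Amplify generates its own bucket names, and files uploaded through Amplify Storage land under public/, protected/&lt;identity-id&gt;/ or private/&lt;identity-id&gt;/ prefixes). The point is that access is gated by the Lambda's execution role, not by the app's Cognito setup:

```python
import json

# Hypothetical bucket name; Amplify CLI generates its own bucket names.
BUCKET = "myapp-storage-dev"

# A policy like this, attached to the Lambda's execution role, is what
# grants the function read access to the Amplify-created bucket.
lambda_s3_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
    }],
}

def handler(event, context):
    # Inside the function, the bucket behaves like any other S3 bucket.
    import boto3
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket=BUCKET, Key="public/uploads/report.csv")
    return {"bytes_read": len(obj["Body"].read())}

print(json.dumps(lambda_s3_policy, indent=2))
```

CORS, by contrast, only governs browser-originated requests; it has no effect on what a Lambda function can read.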
I am building an Android/iOS app with Flutter, where an AWS Amplify backend is used to store files that the app can access. So far this was working: I had set up an S3 bucket and could access all files stored under the /public folder.
I recently added the Amplify REST API plugin by running amplify add api. It asked whether I wanted access to other resources in my project, and I said yes. After that, the Amplify CLI configured the REST API plugin.
Since then I have lost access to the original S3 bucket from my app. I noticed that a new bucket had been created, so I unlinked it and linked the original S3 bucket to my backend app in Amplify Studio, but the app still cannot access any file in the S3 bucket; I always get an Access Denied error. I checked the permissions and they are still the same. What am I not understanding, and what must I do to regain access to the S3 bucket?
How to access, upload and delete objects of the S3 bucket from the web URL securely?
We are accessing the objects in S3 from our application, but that bucket is public, which is not secure.
I have tried CloudFront with an OAI on the S3 bucket and made the bucket private, but access is denied when the application tries to upload an object to the S3 bucket.
We want to upload and delete objects in the S3 bucket, and we want the bucket to be private. We also want to do this from the web application only: not from the CLI, not from any other tool. How can we achieve this?
Any suggestion will be appreciated.
Thanks!
Your application can use an AWS SDK to communicate directly with AWS services.
Your application will require a set of credentials to gain access to resources in your AWS Account. This can be done in one of two ways:
If your application is running on an Amazon EC2 instance, assign an IAM Role to the instance
Otherwise, create an IAM User and store the AWS credentials on the application server by using the AWS CLI aws configure command
You can control the exact permissions and access given to the IAM Role / IAM User. This can include granting the application permission to access your Amazon S3 buckets. This way, the buckets can be kept private, but the application will be able to upload/download/delete objects in the bucket.
To add more to the previous answer, you can find many S3 SDK examples in the AWS GitHub repository located here:
https://github.com/awsdocs/aws-doc-sdk-examples
If you look under each programming language, you will find Amazon S3 examples. You can use the AWS SDK to perform actions on a bucket even when it's private. You can take security a step further and use encryption, as shown here:
https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/java/example_code/s3/src/main/java/aws/example/s3/S3EncryptV2.java
Also, you can interact with Amazon S3 bucket from a web app as well by using the AWS SDK. Here is an example of building a web app using Spring Boot that interacts with an Amazon S3 bucket by reading all of the objects in the bucket.
Creating an example AWS photo analyzer application using the AWS SDK for Java
It's bad practice to use long-term credentials. AWS recommends using short-term credentials obtained via STS. Here is an article that uses Python/Flask to upload a file into a private S3 bucket using STS short-term credentials.
Connect on-premise Python application with AWS services using Roles
I could have listed all the steps in this post, but it would be a bit too long; hence the reference to the link above.
Use Case:
I want to be able to:
Upload images and audio files from my backend to S3 bucket
List and view/play content on my backend
Return the objects URLs in API responses
Mobile apps can view/play the URLs, with or without authentication on the mobile side
Is that possible without making the S3 bucket public ?
Is that possible without making the S3 bucket public ?
Yes, it should be possible. Since you are using an EC2 instance for the backend, you can set up an instance role to give your backend application private, secure access to S3. In the role, you would allow S3 read/write. This way, if your application uses the AWS SDK, you can seamlessly access S3 without making the bucket public.
Regarding links to the objects, the general approach is to return S3 pre-signed URLs. These allow temporary access to your objects without the need for public access. The alternative is to share your objects through CloudFront, as explained in Amazon S3 + Amazon CloudFront: A Match Made in the Cloud. In either case, the bucket can remain private.
I want to copy a folder with large files in it to azure storage.
I found this article that shows how to copy a public AWS bucket to Azure: https://microsoft.github.io/AzureTipsAndTricks/blog/tip220.html
But how can I do this, if the aws bucket is private? How can I pass the credentials to azcopy for it to copy my files from aws bucket to azure directly?
From Copy data from Amazon S3 to Azure Storage by using AzCopy | Microsoft Docs:
AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. This article helps you copy objects, directories, and buckets from Amazon Web Services (AWS) S3 to Azure blob storage by using AzCopy.
The article explains how to provide both Azure and AWS credentials.
One thing to note is that Amazon S3 cannot 'send' data to Azure Blob Storage on its own; something has to initiate the transfer by calling GetObject() on S3. Current versions of AzCopy use server-to-server APIs for S3 sources, so the data is copied directly from S3 into Azure without passing through the machine running AzCopy; with older tools the data would be 'downloaded' from S3 and then 'uploaded' to Azure. Either way, running the command from an AWS or Azure virtual machine, rather than your own computer, can reduce latency.
One solution, albeit not an ideal one: you could request an AWS Snowball export of your bucket data, then use the Azure Import/Export service to ship the data into Azure for ingestion.
Have you tried generating a pre-signed URL with a limited TTL on it for the duration of the copy?
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
That way you can simply execute azcopy copy <aws_presigned_url> <azure_sas_url>.
The URLs contain the information needed for authentication/authorization on both sides.
I have a web application where I want to put my static content in an S3 bucket. I need to access my S3 content from my EC2 instance. The S3 bucket may contain images, songs, and videos. I need to show all the content of the S3 bucket on the website.
If you want to access it by SSHing into the EC2 instance, you can use the AWS CLI (aws s3) or s3cmd.
If you want to include S3 content in a website and make it publicly visible, note that you can access your S3 files using HTTP (see here for instructions):
https://s3.amazonaws.com/bucket/filepath/filename
You can use the AWS SDK for PHP to manage your S3 resources:
PHP S3 SDK
http://docs.aws.amazon.com/aws-sdk-php/v2/guide/service-s3.html
If you want to interact with S3 directly from your PHP code, you are probably looking for the SDK for PHP provided by AWS.
This SDK lets you interact directly with AWS services. See the documentation for S3 linked above.