I want to access the data residing in an AWS S3 bucket from a GCP Cloud Composer environment's service account.
I followed this link, but it also relies on creating access keys.
Is there a way to connect to AWS S3 from GCP via roles only, without keys?
How to access, upload, and delete objects in an S3 bucket from a web application securely?
We are accessing the objects in S3 from our application, but that bucket is public, which is not secure.
I have tried CloudFront with an OAI on the S3 bucket and made the bucket private, but access is denied when the application tries to upload an object to the bucket.
We want to upload and delete objects in the S3 bucket while keeping the bucket private, and we want to do this from the web application only, not from the CLI or any other tool. How can we achieve this?
Any suggestion will be appreciated.
Thanks!
Your application can use an AWS SDK to communicate directly with AWS services.
Your application will require a set of credentials to gain access to resources in your AWS Account. This can be done in one of two ways:
If your application is running on an Amazon EC2 instance, assign an IAM Role to the instance
Otherwise, create an IAM User and store the AWS credentials on the application server by using the AWS CLI aws configure command
You can control the exact permissions and access given to the IAM Role / IAM User. This can include granting the application permission to access your Amazon S3 buckets. This way, the buckets can be kept private, but the application will be able to upload/download/delete objects in the bucket.
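As a minimal sketch of the approach above (the bucket name and object key are placeholders), an application holding IAM Role or IAM User credentials can work with a private bucket through the SDK's default credential chain, and the role/user can be scoped to just the S3 actions the app needs:

```python
import json

def minimal_s3_policy(bucket: str) -> dict:
    """Build an IAM policy granting only the S3 actions the app needs."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }

def upload_delete_demo(bucket: str) -> None:
    # boto3 is imported lazily so the policy helper above works without it.
    # Credentials come from the default chain: an instance role, environment
    # variables, or the profile written by `aws configure`.
    import boto3
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key="uploads/hello.txt", Body=b"hello")
    s3.delete_object(Bucket=bucket, Key="uploads/hello.txt")

if __name__ == "__main__":
    print(json.dumps(minimal_s3_policy("my-app-bucket"), indent=2))
```

Attach the printed policy to the IAM Role or IAM User; the bucket itself can then stay private with no public access at all.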
To add to the previous answer, you can find many S3 SDK examples in the AWS GitHub repository located here:
https://github.com/awsdocs/aws-doc-sdk-examples
If you look under each programming language, you will find Amazon S3 examples. You can use the AWS SDK to perform actions on a bucket even when it's private. You can take security a step further and use encryption, as shown here:
https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/java/example_code/s3/src/main/java/aws/example/s3/S3EncryptV2.java
You can also interact with an Amazon S3 bucket from a web app by using the AWS SDK. Here is an example of building a Spring Boot web app that reads all of the objects in an Amazon S3 bucket.
Creating an example AWS photo analyzer application using the AWS SDK for Java
It's a bad practice to use long-term credentials. AWS recommends using short-term credentials along with STS. Here is an article using Python/Flask to upload a file into a private S3 bucket using STS short-term credentials.
Connect on-premise Python application with AWS services using Roles
I could have listed all the steps in this post, but it would be a bit too long, hence the reference to the link above.
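The short-term-credentials flow can be sketched as follows (the role ARN is a placeholder): call STS `AssumeRole`, turn the response into session credentials, and use those to upload.

```python
def session_kwargs(assume_role_response: dict) -> dict:
    """Map an STS AssumeRole response to boto3 Session keyword arguments."""
    creds = assume_role_response["Credentials"]
    return {
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretAccessKey"],
        "aws_session_token": creds["SessionToken"],
    }

def upload_with_short_term_creds(bucket: str, key: str, data: bytes) -> None:
    # boto3 is imported lazily so the pure helper above has no dependencies.
    import boto3
    sts = boto3.client("sts")
    # The role ARN below is a placeholder; DurationSeconds caps how long
    # the temporary credentials remain valid.
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/s3-uploader",
        RoleSessionName="flask-upload",
        DurationSeconds=900,
    )
    s3 = boto3.Session(**session_kwargs(resp)).client("s3")
    s3.put_object(Bucket=bucket, Key=key, Body=data)
```

Because the credentials expire after `DurationSeconds`, a leaked set is far less damaging than a long-term access key.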
I am exploring network security on GCP. Can anybody please explain how to create a GCS bucket under a VPC, or how to configure a VPC for a GCS bucket?
GCS buckets cannot be assigned to a specific GCP VPC; they are available either via the API (storage.googleapis.com) or through the GCP web UI.
If you need to access them from a GCP VM, you need the right permissions (a service account or gcloud auth) along with the gsutil utility.
Security for GCS is mostly account permissions (service account or GCP account) and/or group permissions on your bucket and its files. For example, if you grant read permission to the "allUsers" group, then everyone with the specific GCS link will be able to read and/or download that file at your expense.
Also, there's GCP Cloud Filestore if NFS over your VPC fits your needs.
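The permission model above can be exercised from code as well as from gsutil. A minimal sketch (bucket and object names are placeholders) using the Python client, which picks up service-account credentials from `GOOGLE_APPLICATION_CREDENTIALS`:

```python
def gs_uri(bucket: str, blob_name: str) -> str:
    """Build the gs:// URI for an object, as used by gsutil."""
    return f"gs://{bucket}/{blob_name}"

def read_object(bucket: str, blob_name: str) -> bytes:
    # google-cloud-storage is imported lazily so the URI helper above
    # works without it. The client authenticates with the service-account
    # key file pointed to by GOOGLE_APPLICATION_CREDENTIALS.
    from google.cloud import storage
    client = storage.Client()
    return client.bucket(bucket).blob(blob_name).download_as_bytes()
```

The download succeeds only if the service account has a role such as Storage Object Viewer on the bucket; there is no VPC-level control involved.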
Can I connect to another account's AWS services (S3, DynamoDB) from an EC2 instance in my account using a VPC endpoint?
Amazon S3 and Amazon DynamoDB are accessed on the Internet via API calls.
When a call is made to these services, a set of credentials is provided to identify the account and user.
If you wish to access S3 or DynamoDB resources belonging to a different account, you simply need to use credentials that belong to the target account. The actual request can be made from anywhere on the Internet (e.g. from Amazon EC2 or from a computer under your desk); the only thing that matters is that you have valid credentials linked to the desired AWS account.
There is no need to manipulate VPC configurations to access resources belonging to a different AWS Account. The source of the request is actually irrelevant.
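One common way to obtain credentials belonging to the target account is a cross-account role: the target account creates a role whose trust policy names your account, and you assume it with STS. A sketch under those assumptions (account ID and role ARN are placeholders):

```python
def cross_account_trust_policy(trusted_account_id: str) -> dict:
    """Trust policy letting principals in another account assume this role."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{trusted_account_id}:root"},
            "Action": "sts:AssumeRole",
        }],
    }

def list_foreign_bucket(bucket: str, role_arn: str) -> list:
    # boto3 is imported lazily; assumes a role in the target account whose
    # trust policy (see above) names the caller's account.
    import boto3
    creds = boto3.client("sts").assume_role(
        RoleArn=role_arn, RoleSessionName="cross-account")["Credentials"]
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    return s3.list_objects_v2(Bucket=bucket).get("Contents", [])
```

Note that this works from any network location, which is exactly the point of the answer: the source VPC is irrelevant.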
Suppose I create a vpc and a vpc-endpoint in region1.
Can I communicate to an s3-bucket-in-region2 using this vpc-endpoint, i.e. without using the internet?
No, VPC endpoints do not support cross-region requests. Your bucket(s) need to be in the same region as the VPC.
Endpoints for Amazon S3
Endpoints currently do not support cross-region requests—ensure that you create your endpoint in the same region as your bucket. You can find the location of your bucket by using the Amazon S3 console, or by using the get-bucket-location command. Use a region-specific Amazon S3 endpoint to access your bucket; for example, mybucket.s3-us-west-2.amazonaws.com. For more information about region-specific endpoints for Amazon S3, see Amazon Simple Storage Service (S3) in Amazon Web Services General Reference.
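The two steps from the quoted documentation, looking up the bucket's region and building its region-specific endpoint, can be sketched as follows (bucket name is a placeholder; the dotted endpoint form here is the current style, while the dashed form in the quote is the older equivalent):

```python
def regional_endpoint(bucket: str, region: str) -> str:
    """Region-specific virtual-hosted endpoint for a bucket (dotted form)."""
    return f"{bucket}.s3.{region}.amazonaws.com"

def bucket_region(bucket: str) -> str:
    # boto3 is imported lazily. get_bucket_location returns a None
    # LocationConstraint for buckets in us-east-1.
    import boto3
    loc = boto3.client("s3").get_bucket_location(Bucket=bucket)
    return loc["LocationConstraint"] or "us-east-1"
```

A VPC endpoint in region1 can then only serve buckets whose `bucket_region` is also region1.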
Can you connect to S3 via s3cmd, or mount S3 on an EC2 instance, with IAM users and without using access keys?
All the tutorials I see say to use access keys, but what if you can't create your own access keys (IT policy)?
There are two ways to access data in Amazon S3: Via an API, or via URLs.
Via an API
When accessing Amazon S3 via API (which includes code using an AWS SDK and also the AWS Command-Line Interface (CLI)), user credentials must be provided in the form of an Access Key and a Secret Key.
The aws and s3cmd utilities, and also software that mounts Amazon S3 as a drive, require access to the API and therefore require credentials.
If you have been given a login to an AWS account, you should be able to ask your administrators to also create credentials that are associated with your User. These credentials will have exactly the same permissions as your normal login (via Username/password), so it's strange that they would be disallowing it. They can be very useful for automating AWS activities, such as starting/stopping Amazon EC2 instances.
Via URLs
Objects stored in Amazon S3 can also be made available via a URL that points directly to the data, e.g. s3.amazonaws.com/bucket-name/object.txt
To provide public access to these objects without requiring credentials, either add permission to each object or create a Bucket Policy that grants access to content within the bucket.
This access method can be used to retrieve individual objects, but is not sufficient to mount Amazon S3 as a drive.
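The Bucket Policy route mentioned above can be sketched as follows (the bucket name is a placeholder); this JSON grants anonymous read access to every object in the bucket:

```python
import json

def public_read_policy(bucket: str) -> dict:
    """Bucket policy granting anonymous GetObject on every object."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }

if __name__ == "__main__":
    # Paste the printed JSON into the bucket's Permissions > Bucket policy.
    print(json.dumps(public_read_policy("bucket-name"), indent=2))
```

This only enables credential-free reads over HTTP; it does not help with mounting S3 as a drive, which still goes through the API and therefore still requires access keys.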