I am exploring network security on GCP. Can anybody please explain how to create a GCS bucket under a VPC, or how to configure a VPC for a GCS bucket?
GCS buckets are not something you can assign to a specific GCP VPC; they are available either via the API (storage.googleapis.com) or through the GCP web UI.
If you need to access them from a GCP VM, you need the right permissions (a service account or gcloud auth) along with the gsutil utility.
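As a minimal sketch of that access from Python (using the google-cloud-storage client instead of gsutil), assuming the VM's attached service account has storage permissions, and with placeholder bucket and object names:

```python
# Sketch: read an object from a VM with the google-cloud-storage client,
# the Python equivalent of `gsutil cp gs://my-example-bucket/data.csv .`.
# "my-example-bucket" and "data.csv" are placeholder names.
from google.cloud import storage

client = storage.Client()  # picks up the VM's attached service account by default
bucket = client.bucket("my-example-bucket")
bucket.blob("data.csv").download_to_filename("data.csv")
```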
Security for GCS is mostly a matter of account (service account or GCP account) permissions and/or group permissions on your bucket or its objects. For example, if you grant read permission to allUsers, then everyone who has the specific GCS link can read and/or download that file, at your expense.
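To illustrate that pitfall, here is roughly what granting public read looks like with the Python client; this is a sketch only, assuming uniform bucket-level access and a placeholder bucket name:

```python
# Sketch: granting roles/storage.objectViewer to allUsers makes every object
# in the bucket publicly readable. "my-example-bucket" is a placeholder.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-example-bucket")

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {"role": "roles/storage.objectViewer", "members": {"allUsers"}}
)
bucket.set_iam_policy(policy)  # anyone with the link can now download objects
```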
Also, there is GCP Cloud Filestore if NFS over your VPC fits your needs.
I want to access data residing in an AWS S3 bucket from a GCP Cloud Composer environment's service account.
I followed this link, but that approach also involves creating access keys.
Is there a way to connect to AWS S3 from GCP via roles only?
How can I securely access, upload, and delete objects in an S3 bucket from a web application?
We are accessing the objects in S3 from our application, but the bucket is public, which is not secure.
I have tried CloudFront with an OAI on the S3 bucket and made the bucket private, but the application is denied access when it tries to upload an object to the bucket.
We want to upload and delete objects in the S3 bucket while keeping the bucket private, and we want to do this from the web application only, not from the CLI or any other tool. How can we achieve this?
Any suggestions will be appreciated.
Thanks!
Your application can use an AWS SDK to communicate directly with AWS services.
Your application will require a set of credentials to gain access to resources in your AWS Account. This can be done in one of two ways:
If your application is running on an Amazon EC2 instance, assign an IAM Role to the instance
Otherwise, create an IAM User and store the AWS credentials on the application server by using the AWS CLI aws configure command
You can control the exact permissions and access given to the IAM Role / IAM User. This can include granting the application permission to access your Amazon S3 buckets. This way, the buckets can be kept private, but the application will be able to upload/download/delete objects in the bucket.
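As a rough sketch of what that looks like in code, using boto3 and its default credential chain (so the same code works whether the credentials come from an instance role or from aws configure); bucket and key names are placeholders:

```python
# Sketch: the app talks to a private bucket through the AWS SDK (boto3).
# boto3 resolves credentials automatically: the EC2 instance role if present,
# otherwise the keys stored earlier via `aws configure`.
import boto3

s3 = boto3.client("s3")

# Upload, download, and delete objects in the private bucket.
s3.upload_file("report.pdf", "my-private-bucket", "reports/report.pdf")
s3.download_file("my-private-bucket", "reports/report.pdf", "report-copy.pdf")
s3.delete_object(Bucket="my-private-bucket", Key="reports/report.pdf")
```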
To add to the previous answer, you can find many S3 SDK examples in the AWS repository on GitHub, located here:
https://github.com/awsdocs/aws-doc-sdk-examples
If you look under each programming language, you will find Amazon S3 examples. You can use the AWS SDK to perform actions on a bucket while it is private. You can take security a step further and use encryption, as shown here:
https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/java/example_code/s3/src/main/java/aws/example/s3/S3EncryptV2.java
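The linked Java example uses the S3 client-side encryption client; boto3 has no equivalent built-in helper, so here is a sketch of the server-side alternative (SSE-KMS) instead, a deliberately different technique, with placeholder names:

```python
# Sketch: server-side encryption with KMS-managed keys (SSE-KMS). S3 encrypts
# the object at rest. Note this differs from the client-side encryption shown
# in the linked Java example. Bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-private-bucket",
    Key="secret/data.txt",
    Body=b"sensitive payload",
    ServerSideEncryption="aws:kms",
)
```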
You can also interact with an Amazon S3 bucket from a web app by using the AWS SDK. Here is an example of building a web app with Spring Boot that interacts with an Amazon S3 bucket by reading all of the objects in the bucket:
Creating an example AWS photo analyzer application using the AWS SDK for Java
It's bad practice to use long-term credentials. AWS recommends using short-term credentials together with STS. Here is an article using Python/Flask to upload a file into a private S3 bucket using STS and short-term credentials:
Connect on-premise Python application with AWS services using Roles
I could have listed all the steps in this post, but it would be a bit too long, hence the reference to the link above.
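For orientation, here is a minimal sketch of the general STS pattern (not the article's exact code); the role ARN and all names are placeholders, and the role must trust the caller's identity:

```python
# Sketch: trade long-term keys for short-lived ones by assuming a role via STS.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/UploaderRole",  # hypothetical role
    RoleSessionName="web-upload",
    DurationSeconds=900,  # credentials expire after 15 minutes
)["Credentials"]

# Build an S3 client from the temporary credentials and upload to the private bucket.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.upload_file("local.csv", "my-private-bucket", "uploads/local.csv")
```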
Can I connect to another account's AWS services (S3, DynamoDB) from an EC2 instance in my account using a VPC endpoint?
Amazon S3 and Amazon DynamoDB are accessed on the Internet via API calls.
When a call is made to these services, a set of credentials is provided to identify the account and user.
If you wish to access S3 or DynamoDB resources belonging to a different account, you simply need to use credentials that belong to the target account. The actual request can be made from anywhere on the Internet (e.g. from Amazon EC2 or from a computer under your desk); the only thing that matters is that you have valid credentials linked to the desired AWS account.
There is no need to manipulate VPC configurations to access resources belonging to a different AWS Account. The source of the request is actually irrelevant.
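As a small sketch of the point, assuming the target account has given you an IAM user whose keys you have stored as a named profile (the profile and bucket names here are placeholders):

```python
# Sketch: access another account's S3 bucket simply by using that account's
# credentials, stored as a named profile (`aws configure --profile other-account`).
import boto3

session = boto3.Session(profile_name="other-account")
s3 = session.client("s3")
response = s3.list_objects_v2(Bucket="their-bucket", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"])
```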
I am exploring the AWS S3 side more and have a question: is it possible to export data from a Windows EC2 instance hosted in one account to an S3 bucket hosted in another AWS account? I know one way is to use external tools like TntDrive, where I can mount S3 as a drive and export the data. If S3 itself provides a better solution, please share your suggestions.
I think all you would need on your EC2 instance is access to the AWS credentials of the other account; then you could copy your file from the EC2 instance to an S3 bucket in the second account. Alternatively, the second account could grant the identity associated with the EC2 instance rights to one of its S3 buckets, in which case you could simply write to that bucket "yourself", as sketched below.
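A sketch of that second option, with placeholder ARNs and bucket names: the bucket owner (the second account) attaches a bucket policy letting the first account's instance role put objects:

```python
# Sketch: run with the *bucket-owning* account's credentials. The principal is
# the EC2 instance role from the first account; that role also needs
# s3:PutObject in its own IAM policy. All names are placeholders.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:role/Ec2ExportRole"},
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::account-b-bucket/*",
    }],
}
boto3.client("s3").put_bucket_policy(
    Bucket="account-b-bucket", Policy=json.dumps(policy)
)
```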
I have an odd request from management (is there any other kind?) and was hoping to get some info here.
Is there a way to mount an EFS volume from AWS account A on a Windows EC2 instance in AWS account B?
This is now supported by Amazon EFS: https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-efs-now-supports-access-across-accounts-and-vpcs/
Unfortunately, you cannot.
From Authentication and Access Control for Amazon EFS:
Cross-account administration – You can use an IAM role in your account to grant another AWS account permissions to administer your account's Amazon EFS resources. For an example, see Tutorial: Delegate Access Across AWS Accounts Using IAM Roles in the IAM User Guide. Note that you can't mount Amazon EFS file systems from across VPCs or accounts. For more information, see Managing File System Network Accessibility.